Practice Test #4 - AWS Certified Cloud Practitioner - Results (Stephen) Flashcards
AWS Organizations provides which of the following benefits? (Select two)
A. Volume discounts for Amazon EC2 and Amazon S3 aggregated across the member AWS accounts
B. Deploy patches on EC2 instances across the member AWS accounts
C. Share the reserved EC2 instances amongst the member AWS accounts
D. Provision EC2 Spot Instances across the member AWS accounts
E. Check vulnerabilities on EC2 instances across the member AWS accounts
A. Volume discounts for Amazon EC2 and Amazon S3 aggregated across the member AWS accounts
C. Share the reserved EC2 instances amongst the member AWS accounts
Explanation:
Volume discounts for Amazon EC2 and Amazon S3 aggregated across the member AWS accounts
Share the reserved EC2 instances amongst the member AWS accounts
AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources such as reserved EC2 instances across your AWS accounts.
Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. AWS Organizations is available to all AWS customers at no additional charge.
You can use AWS Organizations to set up a single payment method for all the AWS accounts in your organization through consolidated billing. With consolidated billing, you can see a combined view of charges incurred by all your accounts, as well as take advantage of pricing benefits from aggregated usage, such as volume discounts for Amazon EC2 and Amazon S3.
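Aggregated volume discounts can be sketched with a toy tiered-pricing model (the tier sizes and prices below are hypothetical, not actual AWS rates):

```python
TIERS = [  # (tier size in TB, price per TB) -- hypothetical numbers
    (50, 23.0),            # first 50 TB
    (450, 22.0),           # next 450 TB
    (float("inf"), 21.0),  # everything beyond 500 TB
]

def monthly_cost(usage_tb: float) -> float:
    """Cost of `usage_tb` terabytes under the tiered price list."""
    cost, remaining = 0.0, usage_tb
    for size, price in TIERS:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

# Two accounts using 40 TB each: billed separately, neither leaves tier 1,
# but consolidated billing aggregates them to 80 TB, which reaches tier 2.
separate = monthly_cost(40) + monthly_cost(40)
consolidated = monthly_cost(80)
assert consolidated < separate
```

This is the mechanism behind the "pricing benefits from aggregated usage" mentioned above: tiered prices mean the combined bill is never higher, and often lower, than the sum of the individual bills.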
Key benefits of AWS Organizations: via - https://aws.amazon.com/organizations/
Incorrect options:
Check vulnerabilities on EC2 instances across the member AWS accounts
Deploy patches on EC2 instances across the member AWS accounts
Provision EC2 Spot Instances across the member AWS accounts
How is Amazon EC2 different from traditional hosting systems? (Select two)
A. Amazon EC2 caters more towards groups of users with similar system requirements so that the server resources are shared across multiple users and the cost is reduced
B. With Amazon EC2, developers can launch and terminate the instances anytime they need to
C. Amazon EC2 provides a pre-configured instance for a fixed monthly cost
D. Amazon EC2 can scale with changing computing requirements
E. With Amazon EC2, users risk overbuying resources
B. With Amazon EC2, developers can launch and terminate the instances anytime they need to
D. Amazon EC2 can scale with changing computing requirements
Explanation:
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud with support for per-second billing. It is the easiest way to provision servers on AWS Cloud and access the underlying OS.
Amazon EC2 differs fundamentally from traditional on-premises hosting systems in the flexibility, control and significant cost savings it offers developers, allowing them to treat an Amazon EC2 instance as their own customized server backed by the robust infrastructure of AWS Cloud.
Amazon EC2 can scale with changing computing requirements - When computing requirements unexpectedly change, Amazon EC2 can be scaled to match the requirements. Developers can control how many EC2 instances are in use at any given point in time.
With Amazon EC2, developers can launch and terminate the instances anytime they need to - Using Amazon EC2, developers can choose not only to launch, terminate, start or shut down instances at any time, but they can also completely customize the configuration of their instances to suit their needs.
Incorrect options:
Amazon EC2 provides a pre-configured instance for a fixed monthly cost - This is an incorrect option. EC2 developers enjoy the benefit of paying only for their actual resource consumption with no monthly or upfront costs. Developers can customize their EC2 instances for their application stack.
With Amazon EC2, users risk overbuying resources - This is an incorrect statement. Users risk overbuying in traditional hosting services where users pay a fixed, up-front fee irrespective of their actual computing power used. With EC2, users pay only for the actual resources consumed.
Amazon EC2 caters more towards groups of users with similar system requirements so that the server resources are shared across multiple users and the cost is reduced - This is an incorrect statement. Resources are not shared between users in EC2, which is why users have the flexibility to start or shut down their instances as per their requirements. This is not possible with traditional hosting systems, where resources are shared across users.
An e-commerce company would like to receive alerts when the Reserved EC2 Instances utilization drops below a certain threshold. Which AWS service can be used to address this use-case?
A. AWS Trusted Advisor
B. AWS Systems Manager
C. AWS Budgets
D. AWS Cost Explorer
C. AWS Budgets
Explanation:
AWS Budgets
AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets: you define a utilization threshold and receive alerts when your RI usage falls below it, which lets you see whether your RIs are unused or under-utilized. Reservation alerts are supported for Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache, and Amazon Elasticsearch reservations.
AWS Budgets Overview: via - https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/budgets-managing-costs.html
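As a sketch, the alert described above could be expressed as the request body for the AWS Budgets CreateBudget API (for example via boto3's `budgets` client). The budget name, service filter value, threshold, and e-mail address below are illustrative placeholders:

```python
# Utilization budgets track a percentage, so the budget limit is fixed at 100%.
ri_utilization_budget = {
    "BudgetName": "ec2-ri-utilization",   # placeholder name
    "BudgetType": "RI_UTILIZATION",       # track RI utilization, not cost
    "TimeUnit": "MONTHLY",
    "BudgetLimit": {"Amount": "100", "Unit": "PERCENTAGE"},
    "CostFilters": {"Service": ["Amazon Elastic Compute Cloud - Compute"]},
}

alert_when_below_threshold = {
    "Notification": {
        "NotificationType": "ACTUAL",
        "ComparisonOperator": "LESS_THAN",  # fire when utilization < threshold
        "Threshold": 80.0,                  # percent -- illustrative value
        "ThresholdType": "PERCENTAGE",
    },
    "Subscribers": [
        {"SubscriptionType": "EMAIL", "Address": "ops@example.com"},  # placeholder
    ],
}
```

A real call would pass these as the `Budget` and `NotificationsWithSubscribers` parameters to `create_budget`, along with your account ID; consult the API reference for the exact required fields.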
Incorrect options:
AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits.
AWS Cost Explorer - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends. Cost Explorer cannot be used to identify under-utilized EC2 instances.
AWS Systems Manager - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks such as running commands, managing patches, and configuring servers across AWS Cloud as well as on-premises infrastructure.
Which of the following AWS services can be used to forecast your AWS account usage and costs?
A. AWS Pricing Calculator
B. AWS Budgets
C. AWS Cost and Usage Reports
D. AWS Cost Explorer
D. AWS Cost Explorer
Explanation:
AWS Cost Explorer
AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends. AWS Cost Explorer also supports forecasting to get a better idea of what your costs and usage may look like in the future so that you can plan.
AWS Cost Explorer Features: via - https://aws.amazon.com/aws-cost-management/aws-cost-explorer/
Incorrect options:
AWS Cost and Usage Reports - The AWS Cost and Usage Reports (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use Cost and Usage Reports to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or month, by product or product resource, or by tags that you define yourself. AWS updates the report in your bucket once a day in a comma-separated value (CSV) format. AWS Cost and Usage Reports cannot forecast your AWS account cost and usage.
AWS Budgets - AWS Budgets gives the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. AWS Budgets cannot forecast your AWS account cost and usage.
AWS Pricing Calculator - AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your use cases on AWS. You can model your solutions before building them, explore the price points and calculations behind your estimate, and find the available instance types and contract terms that meet your needs. This enables you to make informed decisions about using AWS. You can plan your AWS costs and usage or price out setting up a new set of instances and services. You cannot use this service to forecast your AWS account cost and usage.
Which of the following entities can be used to connect to an EC2 server from a Mac OS, Windows or Linux based computer via a browser-based client?
A. PuTTY
B. EC2 Instance Connect
C. SSH
D. AWS Direct Connect
B. EC2 Instance Connect
Explanation:
EC2 Instance Connect
Amazon EC2 Instance Connect provides a simple and secure way to connect to your instances using Secure Shell (SSH). With EC2 Instance Connect, you use AWS Identity and Access Management (IAM) policies and principals to control SSH access to your instances, removing the need to share and manage SSH keys. All connection requests using EC2 Instance Connect are logged to AWS CloudTrail so that you can audit connection requests.
You can use Instance Connect to connect to your Linux instances using a browser-based client, the Amazon EC2 Instance Connect CLI, or the SSH client of your choice. EC2 Instance Connect can be used to connect to an EC2 instance from a Mac OS, Windows or Linux based computer.
Incorrect options:
SSH - SSH can be used from a Mac OS, Windows or Linux based computer, but it’s not a browser-based client.
PuTTY - PuTTY can be used only from Windows-based computers.
AWS Direct Connect - AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC. Setting up this private connection typically takes at least a month. Direct Connect cannot be used to connect to an EC2 instance from a Mac OS, Windows or Linux based computer.
A social media analytics company wants to migrate to a serverless stack on AWS. Which of the following scenarios can be handled by AWS Lambda? (Select two)
A. Lambda can be used to execute code in response to events such as updates to DynamoDB tables
B. You can install Container Services on Lambda
C. Lambda can be used to store sensitive environment variables
D. Lambda can be used for preprocessing of data before it is stored in Amazon S3 buckets
E. You can install low latency databases on Lambda
A. Lambda can be used to execute code in response to events such as updates to DynamoDB tables
D. Lambda can be used for preprocessing of data before it is stored in Amazon S3 buckets
Explanation:
AWS Lambda lets you run code without provisioning or managing servers (Lambda is serverless). With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. This functionality makes it an extremely useful service capable of being a serverless backend for websites, data preprocessing, real-time data transformations when used with streaming data, etc.
How Lambda Works: via - https://aws.amazon.com/lambda/
Lambda can be used to execute code in response to events such as updates to DynamoDB tables - Lambda can be configured to execute code in response to events, such as changes to Amazon S3 buckets, updates to an Amazon DynamoDB table, or custom events generated by your applications or devices.
Lambda can be used for preprocessing of data before it is stored in Amazon S3 buckets - Lambda can be used to run preprocessing scripts to filter, sort or transform data before sending it to downstream applications/services.
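The DynamoDB-stream scenario above can be sketched as a minimal Lambda handler. The event shape follows the DynamoDB Streams record format; the processing logic (collecting the keys of inserted or modified items) is purely illustrative:

```python
def handler(event, context=None):
    """Return the partition-key values of items modified in this batch."""
    updated_keys = []
    for record in event.get("Records", []):
        if record.get("eventName") in ("INSERT", "MODIFY"):
            keys = record["dynamodb"]["Keys"]
            # DynamoDB attribute values are typed, e.g. {"S": "user-1"}
            updated_keys.append(list(keys.values())[0].get("S"))
    return {"updated": updated_keys}

# A hand-written sample event in the DynamoDB Streams record shape
sample_event = {
    "Records": [
        {"eventName": "INSERT",
         "dynamodb": {"Keys": {"UserId": {"S": "user-1"}}}},
        {"eventName": "REMOVE",
         "dynamodb": {"Keys": {"UserId": {"S": "user-2"}}}},
    ]
}
```

Calling `handler(sample_event)` returns `{"updated": ["user-1"]}`: the INSERT record is collected, the REMOVE record is skipped.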
Incorrect options:
You can install low latency databases on Lambda - Lambda is serverless, so the underlying hardware and its workings are not exposed to the customer. Installing software is not possible since we do not have access to the actual physical server on which Lambda executes the code.
You can install Container Services on Lambda - As discussed above, Lambda cannot be used for installing any software, since the underlying hardware/software might change for each request. However, it is possible to set up an environment with the necessary libraries when running scripts on Lambda.
Lambda can be used to store sensitive environment variables - Lambda is not a storage service and does not offer capabilities to store data. However, it is possible to read and decrypt/encrypt data using scripts in Lambda.
Which of the following are serverless computing services offered by AWS? (Select two)
A. AWS Elastic Beanstalk
B. AWS Lambda
C. Amazon Elastic Compute Cloud (EC2)
D. AWS Fargate
E. Amazon Lightsail
B. AWS Lambda
D. AWS Fargate
Explanation:
Serverless is the native architecture of the cloud that enables you to shift more of your operational responsibilities to AWS, increasing your agility and innovation. Serverless allows you to build and run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning.
The AWS serverless platform overview: via - https://aws.amazon.com/serverless/
AWS Lambda - With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running.
AWS Fargate - AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
AWS Fargate is a purpose-built serverless compute engine for containers. Fargate scales and manages the infrastructure required to run your containers.
Incorrect options:
Amazon Elastic Compute Cloud (EC2) - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud with support for per-second billing. It is the easiest way to provision servers on AWS Cloud and access the underlying OS. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.
AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services. You simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. Beanstalk provisions servers so it is not a serverless service.
Amazon Lightsail - Lightsail is an easy-to-use cloud platform that offers you everything needed to build an application or website, plus a cost-effective, monthly plan. Lightsail offers several preconfigured, one-click-to-launch operating systems, development stacks, and web applications, including Linux, Windows OS, and WordPress.
Which of the following is available across all AWS Support plans?
A. Full set of AWS Trusted Advisor best practice checks
B. AWS Personal Health Dashboard
C. Third Party Software Support
D. Enhanced Technical Support with unlimited cases and unlimited contacts
B. AWS Personal Health Dashboard
Explanation:
“AWS Personal Health Dashboard”
The full set of AWS Trusted Advisor best practice checks, Enhanced Technical Support with unlimited cases and unlimited contacts, and Third-Party Software Support are available only for the Business and Enterprise Support plans.
AWS Personal Health Dashboard is available for all Support plans.
Exam Alert:
Please review the differences between the Developer, Business, and Enterprise support plans as you can expect at least a couple of questions on the exam:
via - https://aws.amazon.com/premiumsupport/plans/
Incorrect options:
“Full set of AWS Trusted Advisor best practice checks”
“Enhanced Technical Support with unlimited cases and unlimited contacts”
“Third-Party Software Support”
As mentioned in the explanation above, these options are available only for Business and Enterprise Support plans.
Which of the following AWS services offer Lifecycle Management for cost-optimal storage?
A. AWS Storage Gateway
B. Amazon S3
C. Amazon EBS
D. Amazon Instance Store
B. Amazon S3
Explanation:
You can manage your objects on S3 so that they are stored cost-effectively throughout their lifecycle by configuring their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects.
There are two types of actions:
Transition actions — Define when objects transition to another storage class. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them, or archive objects to the S3 Glacier storage class one year after creating them.
Expiration actions — Define when objects expire. Amazon S3 deletes expired objects on your behalf.
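The two action types can be combined in a single lifecycle rule. Below is a sketch in the shape accepted by S3's PutBucketLifecycleConfiguration API; the rule ID and prefix are illustrative:

```python
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "transition-then-expire",      # illustrative rule name
            "Filter": {"Prefix": "logs/"},       # apply only to this prefix
            "Status": "Enabled",
            # Transition actions: move objects to cheaper storage over time
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
            # Expiration action: S3 deletes the objects after two years
            "Expiration": {"Days": 730},
        }
    ]
}
```

A real call would pass this dict as the `LifecycleConfiguration` parameter of boto3's `put_bucket_lifecycle_configuration`, together with the bucket name.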
Incorrect options:
Amazon Instance Store - An Instance Store provides temporary block-level storage for your EC2 instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. Instance storage is temporary; data is lost if the instance experiences failure or is terminated. Instance Store does not offer Lifecycle Management or an Infrequent Access storage class.
Amazon EBS - Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS. It does not offer Lifecycle Management or Infrequent Access storage class.
AWS Storage Gateway - AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. All data transferred between the gateway and AWS storage is encrypted using SSL (for all three types of gateways - File, Volume and Tape Gateways). Storage Gateway does not offer Lifecycle Management or Infrequent Access storage class.
A streaming media company wants to convert English language subtitles into Spanish language subtitles. As a Cloud Practitioner, which AWS service would you recommend for this use-case?
A. Amazon Rekognition
B. Amazon Translate
C. Amazon Polly
D. Amazon Transcribe
B. Amazon Translate
Explanation:
Amazon Translate
Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Amazon Translate allows you to localize content - such as websites and applications - for international users, and to easily translate large volumes of text efficiently.
Incorrect options:
Amazon Polly - You can use Amazon Polly to turn text into lifelike speech thereby allowing you to create applications that talk. Polly’s Text-to-Speech (TTS) service uses advanced deep learning technologies to synthesize natural sounding human speech.
Amazon Transcribe - You can use Amazon Transcribe to add speech-to-text capability to your applications. Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert speech to text quickly and accurately. Amazon Transcribe can be used to transcribe customer service calls, to automate closed captioning and subtitling, and to generate metadata for media assets.
Amazon Rekognition - With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos, as well as to detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.
Which of the following entities are part of a VPC in the AWS Cloud? (Select two)
A. Subnet
B. Object
C. API Gateway
D. Storage Gateway
E. Internet Gateway
A. Subnet
E. Internet Gateway
Explanation:
Subnet
Internet Gateway
Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined.
The following are the key concepts for VPCs:
Virtual private cloud (VPC) — A virtual network dedicated to your AWS account.
Subnet — A range of IP addresses in your VPC.
Route table — A set of rules, called routes, that are used to determine where network traffic is directed.
Internet Gateway — A gateway that you attach to your VPC to enable communication between resources in your VPC and the internet.
VPC endpoint — Enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
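The relationship between a VPC's CIDR block and a subnet's CIDR block can be made concrete with Python's ipaddress module (the address ranges below are illustrative):

```python
import ipaddress

vpc_cidr = ipaddress.ip_network("10.0.0.0/16")       # the VPC's address range
public_subnet = ipaddress.ip_network("10.0.1.0/24")  # one subnet inside it

# A subnet is, by definition, a sub-range of the VPC's IP addresses
assert public_subnet.subnet_of(vpc_cidr)
print(public_subnet.num_addresses)  # 256 addresses in a /24
```

(Note that within an actual AWS subnet, a handful of addresses in each CIDR block are reserved by AWS and cannot be assigned to instances.)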
Incorrect options:
Storage Gateway - AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. Storage Gateway is not part of VPC.
API Gateway - Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services. API Gateway is not part of a VPC.
Object - Buckets and objects are part of Amazon S3. Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
Which AWS service would you use to create a logically isolated section of the AWS Cloud where you can launch AWS resources in your virtual network?
A. Subnet
B. Network Access Control List (NACL)
C. Virtual Private Cloud (VPC)
D. Virtual Private Network (VPN)
C. Virtual Private Cloud (VPC)
Explanation:
Virtual Private Cloud (VPC)
Amazon Virtual Private Cloud (Amazon VPC) is a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including the selection of your IP address range, creation of subnets, and configuration of route tables and network gateways. You can easily customize the network configuration of your Amazon VPC using public and private subnets.
Incorrect options:
Virtual Private Network (VPN) - AWS Virtual Private Network (AWS VPN) lets you establish a secure and private encrypted tunnel from your on-premises network to the AWS global network. AWS VPN is comprised of two services: AWS Site-to-Site VPN and AWS Client VPN. You cannot use VPN to create a logically isolated section of the AWS Cloud.
Subnet - A subnet is a range of IP addresses within your VPC. A subnet is not an AWS service, so this option is ruled out.
Network Access Control List (NACL) - A network access control list (NACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. A NACL is not an AWS service, so this option is ruled out.
A firm wants to maintain the same data on S3 between its production account and multiple test accounts. Which technique should you choose to copy data into multiple test accounts while retaining object metadata?
A. Amazon S3 Storage Classes
B. Amazon S3 Bucket Policy
C. Amazon S3 Transfer Acceleration
D. Amazon S3 Replication
D. Amazon S3 Replication
Explanation:
Amazon S3 Replication
Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. You can copy objects between different AWS Regions or within the same Region. You can use replication to make copies of your objects that retain all metadata, such as the original object creation time and version IDs. This capability is important if you need to ensure that your replica is identical to the source object.
Exam Alert:
Amazon S3 supports two types of replication: Cross Region Replication vs Same Region Replication. Please review the differences between SRR and CRR: via - https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html
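A cross-account replication rule like the one in this use-case can be sketched in the shape accepted by S3's PutBucketReplication API; the IAM role ARN, destination bucket ARN, and account ID below are placeholders:

```python
replication_configuration = {
    # IAM role S3 assumes to replicate objects on your behalf (placeholder ARN)
    "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
    "Rules": [
        {
            "ID": "replicate-to-test-account",  # illustrative rule name
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},                       # empty filter = all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                # the destination bucket can live in another AWS account
                "Bucket": "arn:aws:s3:::test-account-bucket",  # placeholder
                "Account": "222222222222",                     # placeholder
                # transfer object ownership to the destination account
                "AccessControlTranslation": {"Owner": "Destination"},
            },
        }
    ],
}
```

A real call would pass this dict as the `ReplicationConfiguration` parameter of boto3's `put_bucket_replication` on the source bucket; replicating to several test accounts would use one rule (or one configuration) per destination.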
Incorrect options:
Amazon S3 Bucket Policy - A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. Object permissions apply only to the objects that the bucket owner creates. You cannot replicate data using a bucket policy.
Amazon S3 Transfer Acceleration - Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. This facility speeds up access between the end user and S3; it is not for replicating data.
Amazon S3 Storage Classes - Amazon S3 offers a range of storage classes designed for different use cases. Each storage class has a defined set of rules for storing and encrypting data at a certain price. Based on the use case, customers can choose the storage class that best suits their business requirements.
These include S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation. You cannot replicate data using storage classes.
The QA team at a company wants a tool/service that can provide access to different mobile devices with variations in firmware and Operating System versions.
Which AWS service can address this use case?
A. AWS Elastic Beanstalk
B. AWS CodePipeline
C. AWS Mobile Farm
D. AWS Device Farm
D. AWS Device Farm
Explanation:
AWS Device Farm - AWS Device Farm is an application testing service that lets you improve the quality of your web and mobile apps by testing them across an extensive range of desktop browsers and real mobile devices; without having to provision and manage any testing infrastructure. The service enables you to run your tests concurrently on multiple desktop browsers or real devices to speed up the execution of your test suite, and generates videos and logs to help you quickly identify issues with your app.
AWS Device Farm is designed for developers, QA teams, and customer support representatives who are building, testing, and supporting mobile apps to increase the quality of their apps. Application quality is increasingly important, and also getting complex due to the number of device models, variations in firmware and OS versions, carrier and manufacturer customizations, and dependencies on remote services and other apps. AWS Device Farm accelerates the development process by executing tests on multiple devices, giving developers, QA and support professionals the ability to perform automated tests and manual tasks like reproducing customer issues, exploratory testing of new functionality, and executing manual test plans. AWS Device Farm also offers significant savings by eliminating the need for internal device labs, lab managers, and automation infrastructure development.
How it works: via - https://aws.amazon.com/device-farm/
Incorrect options:
AWS CodePipeline - AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. This enables you to rapidly and reliably deliver features and updates.
AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, etc.
You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.
AWS Mobile Farm - This is an invalid option, given only as a distractor.
Which AWS service would you choose for a data processing project that needs a schemaless database?
A. Amazon Redshift
B. Amazon Aurora
C. Amazon DynamoDB
D. Amazon RDS
C. Amazon DynamoDB
Explanation:
Amazon DynamoDB
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-Region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB is schemaless. DynamoDB can manage structured or semistructured data, including JSON documents.
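"Schemaless" can be illustrated directly: two items in the same DynamoDB table may carry entirely different attribute sets, with only the key attribute(s) fixed. The items below use the boto3-resource style (plain Python values); the table contents are made up:

```python
# Items destined for the same hypothetical "Movies" table, with "title" as
# the partition key. Only the key attribute must appear on every item.
movies = [
    {"title": "Film A", "year": 2001, "rating": 8.1},
    # a second item with different, extra attributes -- no schema change needed
    {"title": "Film B", "year": 2005, "cast": ["X", "Y"], "awards": 3},
]

assert all("title" in item for item in movies)  # only the key is mandatory
```

In a relational service such as RDS or Redshift, adding the `cast` and `awards` columns would require altering the table definition first; DynamoDB accepts both items as-is.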
Incorrect options:
Amazon Redshift - Amazon Redshift is a fully managed, petabyte-scale, cloud-based data warehouse designed for large-scale data set storage and analysis. Amazon Redshift requires a well-defined schema.
Amazon Aurora - Amazon Aurora is an AWS service for relational databases. Aurora requires a well-defined schema.
Amazon RDS - Amazon RDS is an AWS service for relational databases. RDS requires a well-defined schema.
As per the Shared Responsibility Model, Security and Compliance is a shared responsibility between AWS and the customer. Which of the following security services falls under the purview of AWS under the Shared Responsibility Model?
A. AWS Web Application Firewall (WAF)
B. AWS Shield Standard
C. AWS Shield Advanced
D. Security Groups for Amazon EC2
B. AWS Shield Standard
Explanation:
AWS Shield Standard
AWS Shield is a managed service that protects against Distributed Denial of Service (DDoS) attacks for applications running on AWS. AWS Shield Standard is enabled for all AWS customers at no additional cost. It automatically protects your web applications running on AWS against the most common, frequently occurring DDoS attacks. You can get the full benefits of AWS Shield Standard by following the best practices of DDoS resiliency on AWS. Because Shield Standard is automatically activated for all AWS customers with no customization options, AWS manages the maintenance and configuration of this service. Hence this service falls under the purview of AWS.
Incorrect options:
AWS Web Application Firewall (WAF) - AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to an Amazon API Gateway API, an Amazon CloudFront distribution, or an Application Load Balancer. AWS WAF also lets you control access to your content. AWS WAF has to be enabled by the customer and comes under the customer’s responsibility.
AWS Shield Advanced - For higher levels of protection against attacks, you can subscribe to AWS Shield Advanced. As an AWS Shield Advanced customer, you can contact a 24x7 DDoS response team (DRT) for assistance during a DDoS attack. You also have exclusive access to advanced, real-time metrics and reports for extensive visibility into attacks on your AWS resources. Customers need to subscribe to Shield Advanced and need to pay for this service. It falls under customer responsibility per the AWS Shared Responsibility Model.
Security Groups for Amazon EC2 - A Security Group acts as a virtual firewall for the EC2 instance to control incoming and outgoing traffic. Inbound rules control the incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance. Security groups are the responsibility of the customer.
AWS Marketplace facilitates which of the following use-cases? (Select two)
A. Purchase compliance documents from third party vendors
B. AWS customers can buy software that has been bundled into customized AMIs by AWS Marketplace sellers
C. Raise request for purchasing AWS Direct Connect connection
D. Buy Amazon EC2 Standard Reserved Instances
E. Sell Software as a Service (SaaS) solutions to AWS customers
B. AWS customers can buy software that has been bundled into customized AMIs by AWS Marketplace sellers
E. Sell Software as a Service (SaaS) solutions to AWS customers
Explanation:
Sell Software as a Service (SaaS) solutions to AWS customers
AWS customers can buy software that has been bundled into customized AMIs by AWS Marketplace sellers
AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS. AWS Marketplace enables qualified partners to market and sell their software to AWS customers.
AWS Marketplace offers two ways for sellers to deliver software to customers: Amazon Machine Image (AMI) and Software as a Service (SaaS).
Amazon Machine Image (AMI): Offering an AMI is the preferred option for listing products in AWS Marketplace. Partners have the option for free or paid products. Partners can offer paid products charged by the hour or month. Bring Your Own License (BYOL) is also available and enables customers with existing software licenses to easily migrate to AWS.
Software as a Service (SaaS): If you offer a SaaS solution running on AWS (and are unable to build your product into an AMI), the SaaS listing offers partners a way to market their software to customers.
Incorrect options:
Purchase compliance documents from third-party vendors - Compliance documents are not provided by third-party vendors. AWS Artifact is your go-to, central resource for compliance-related information. It provides on-demand access to AWS security and compliance reports and select online agreements.
Buy Amazon EC2 Standard Reserved Instances - Amazon EC2 Standard Reserved Instances can be bought from the Amazon EC2 console at https://console.aws.amazon.com/ec2/
Raise request for purchasing AWS Direct Connect connection - AWS Direct Connect connection can be raised from the AWS management console at https://console.aws.amazon.com/directconnect/v2/home
Which of the following is a container service of AWS?
A. AWS Fargate
B. Amazon Simple Notification Service
C. AWS Elastic Beanstalk
D. Amazon SageMaker
A. AWS Fargate
Explanation:
AWS Fargate
AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
How Fargate Works: via - https://aws.amazon.com/fargate/
Incorrect options:
AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services. You simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto-scaling to application health monitoring. Elastic Beanstalk provisions and manages servers, so it is not a serverless container service like Fargate.
Amazon Simple Notification Service - Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.
Amazon SageMaker - Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models.
Which AWS service will you use to provision the same AWS infrastructure across multiple AWS accounts and regions?
A. AWS CloudFormation
B. AWS OpsWorks
C. AWS Systems Manager
D. AWS CodeDeploy
A. AWS CloudFormation
Explanation:
AWS CloudFormation
AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all Regions and accounts. A stack is a collection of AWS resources that you can manage as a single unit. In other words, you can create, update, or delete a collection of resources by creating, updating, or deleting stacks.
AWS CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and regions with a single operation. Using an administrator account, you define and manage an AWS CloudFormation template, and use the template as the basis for provisioning stacks into selected target accounts across specified regions.
How CloudFormation Works: via - https://aws.amazon.com/cloudformation/
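The multi-account, multi-region rollout described above maps onto CloudFormation's CreateStackInstances API (for example, boto3's `cloudformation.create_stack_instances`). The sketch below only builds the request parameters rather than calling AWS, and the stack set name and account IDs are hypothetical:

```python
# Hedged sketch: request parameters for CloudFormation StackSets'
# CreateStackInstances call. One operation fans the same template out
# to every target account/region pair. Names and IDs are hypothetical.
stack_instance_request = {
    "StackSetName": "shared-network-baseline",     # hypothetical stack set
    "Accounts": ["111111111111", "222222222222"],  # target member accounts
    "Regions": ["us-east-1", "eu-west-1"],         # target regions
}

# StackSets creates one stack instance per (account, region) pair.
pairs = [(acct, region)
         for acct in stack_instance_request["Accounts"]
         for region in stack_instance_request["Regions"]]
print(len(pairs))  # 4 stack instances from a single operation
```

With plain stacks you would have to run four separate create-stack operations (and sign in to each account); StackSets collapses that into one call from the administrator account.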
Incorrect options:
AWS CodeDeploy - AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. You cannot use this service to provision AWS infrastructure.
AWS OpsWorks - AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. You cannot use this service to provision the same AWS infrastructure across multiple accounts and regions.
AWS Systems Manager - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources. You cannot use this service to provision AWS infrastructure.
Which AWS service will help you install application code automatically to an Amazon EC2 instance?
A. AWS CloudFormation
B. AWS CodeDeploy
C. AWS Elastic Beanstalk
D. AWS CodeBuild
B. AWS CodeDeploy
Explanation:
AWS CodeDeploy
AWS CodeDeploy is a service that automates application deployments to a variety of compute services including Amazon EC2, AWS Fargate, AWS Lambda, and on-premises instances. CodeDeploy fully automates your application deployments eliminating the need for manual operations. CodeDeploy protects your application from downtime during deployments through rolling updates and deployment health tracking.
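For Amazon EC2 deployments, CodeDeploy is driven by an AppSpec file (appspec.yml) packaged with the application revision; it tells the agent on the instance what to copy where and which lifecycle scripts to run. A minimal sketch (the destination path and script name are hypothetical):

```yaml
version: 0.0
os: linux
files:
  - source: /                       # copy the whole revision...
    destination: /var/www/html      # ...to the web root on the instance
hooks:
  AfterInstall:
    - location: scripts/restart_server.sh   # hypothetical lifecycle script
      timeout: 300
```

The CodeDeploy agent on the EC2 instance executes this file at each deployment, which is what makes the code installation automatic.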
Incorrect options:
AWS Elastic Beanstalk - AWS Elastic Beanstalk is the fastest and simplest way to get web applications up and running on AWS. Developers simply upload their application code and the service automatically handles all the details such as resource provisioning, load balancing, auto-scaling, and monitoring. Elastic Beanstalk is an end-to-end application platform, unlike CodeDeploy, which is targeted at code deployment automation for any environment (Development, Testing, Production). It cannot be used to automatically deploy code to an existing Amazon EC2 instance that you manage yourself.
AWS CloudFormation - AWS CloudFormation provides a common language for you to model and provision AWS and third-party application resources in your cloud environment. AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. It cannot be used to automatically deploy code to an Amazon EC2 instance.
AWS CodeBuild - AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. It cannot be used to automatically deploy code to an Amazon EC2 instance.
According to the AWS Shared Responsibility Model, which of the following are responsibilities of the customer for IAM? (Select two)
A. Manage global network security infrastructure
B. Compliance validation for the underlying software infrastructure
C. Enable MFA on all accounts
D. Analyze user access patterns and review IAM permissions
E. Configuration and vulnerability analysis for the underlying software infrastructure
C. Enable MFA on all accounts
D. Analyze user access patterns and review IAM permissions
Explanation:
Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.
Enable MFA on all accounts
Analyze user access patterns and review IAM permissions
Under the AWS Shared Responsibility Model, customers are responsible for enabling MFA on all accounts, analyzing access patterns and reviewing permissions.
Shared Responsibility Model Overview: via - https://aws.amazon.com/compliance/shared-responsibility-model/
Incorrect options:
Manage global network security infrastructure
Configuration and vulnerability analysis for the underlying software infrastructure
Compliance validation for the underlying software infrastructure
According to the AWS Shared Responsibility Model, AWS is responsible for “Security of the Cloud”. This includes protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services. Therefore these three options fall under the responsibility of AWS.
Which AWS Route 53 routing policy would you use to route traffic to a single resource such as a web server for your website?
A. Failover routing policy
B. Latency routing policy
C. Simple routing policy
D. Weighted routing policy
C. Simple routing policy
Explanation:
Simple routing policy
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into numeric IP addresses, such as 192.0.2.1, that computers use to connect to each other.
Simple routing lets you configure standard DNS records, with no special Route 53 routing such as weighted or latency. With simple routing, you typically route traffic to a single resource, for example, to a web server for your website.
Route 53 Routing Policy Overview: via - https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
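A simple routing policy corresponds to a plain DNS record with none of the special Route 53 fields set. As an illustration, this is roughly the change batch you would send to Route 53's ChangeResourceRecordSets API (for example via boto3's `route53.change_resource_record_sets`); the sketch only builds the parameters, and the domain and IP are the documentation-example values:

```python
# Hedged sketch: a simple-routing A record pointing a domain at one
# web server. Omitting Weight, Region, and Failover fields is exactly
# what makes this "simple" routing.
change_batch = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "192.0.2.1"}],
        },
    }]
}
print(change_batch["Changes"][0]["ResourceRecordSet"]["Type"])  # A
```

A weighted policy would add a `Weight` field and a `SetIdentifier` to each of several record sets; a latency policy would add a `Region` field.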
Incorrect options:
Failover routing policy - This routing policy is used when you want to configure active-passive failover.
Weighted routing policy - This routing policy is used to route traffic to multiple resources in proportions that you specify.
Latency routing policy - This routing policy is used when you have resources in multiple AWS Regions and you want to route traffic to the region that provides the best latency.
The DevOps team at an IT company wants to centrally manage its servers on AWS Cloud as well as on-premise data center so that it can collect software inventory, run commands, configure and patch servers at scale. As a Cloud Practitioner, which AWS service would you recommend for this use-case?
A. Systems Manager
B. OpsWorks
C. Config
D. CloudFormation
A. Systems Manager
Explanation:
Systems Manager
AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks such as collecting software inventory, running commands, managing patches, and configuring servers across AWS Cloud as well as on-premises infrastructure.
AWS Systems Manager offers utilities for running commands, patch-management and configuration compliance: via - https://aws.amazon.com/systems-manager/faq/
via - https://aws.amazon.com/systems-manager/
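The "run commands at scale" part of the use-case maps onto Systems Manager Run Command (the SendCommand API, e.g. boto3's `ssm.send_command`). The sketch below only builds the request parameters; the instance IDs are hypothetical, while AWS-RunShellScript is a real AWS-managed document:

```python
# Hedged sketch: parameters for Systems Manager's SendCommand API to
# run the same shell command across a fleet of managed instances
# (EC2 or on-premises servers registered with SSM).
send_command_request = {
    "InstanceIds": ["i-0abc123def4567890", "i-0123456789abcdef0"],  # hypothetical
    "DocumentName": "AWS-RunShellScript",   # AWS-managed command document
    "Parameters": {"commands": ["sudo yum update -y"]},
}
print(len(send_command_request["InstanceIds"]))  # 2
```

Because on-premises servers running the SSM Agent can be registered as managed instances, the same call covers both the AWS Cloud and the data center, which is the hybrid requirement in this question.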
Incorrect options:
OpsWorks - AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. You cannot use OpsWorks for collecting software inventory or viewing operational data from multiple AWS services.
CloudFormation - AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all Regions and accounts. Think infrastructure as code; think CloudFormation. You cannot use CloudFormation for running commands or managing patches on servers.
Config - AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. You cannot use Config for running commands or managing patches on servers.
Amazon EC2 Spot instances are a best-fit for which of the following scenarios?
A. To run batch processes for critical workloads
B. To run any containerized workload with Elastic Container Service (ECS) that can be interrupted
C. To install cost-effective RDS database
D. To run scheduled jobs (jobs that run at the same time every day)
B. To run any containerized workload with Elastic Container Service (ECS) that can be interrupted
Explanation:
To run any containerized workload with Elastic Container Service (ECS) that can be interrupted
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices.
Containers are often stateless and fault-tolerant, which makes them a great fit for Spot Instances. Spot Instances can be used with Elastic Container Service (ECS) or Elastic Kubernetes Service (EKS) to run any containerized workload, from distributed parallel test systems to applications that map millions of miles a day. Spot Instances provide the flexibility of ad-hoc provisioning for multiple instance types in different Availability Zones, with an option to hibernate, stop, or terminate instances when EC2 needs the capacity back and Spot Instances are reclaimed.
via - https://aws.amazon.com/ec2/spot/containers-for-less/
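When EC2 reclaims Spot capacity, it posts a two-minute interruption notice at the instance metadata path `http://169.254.169.254/latest/meta-data/spot/instance-action`. A containerized worker can poll that endpoint and drain gracefully. The sketch below parses the documented response shape; the polling loop and the sample timestamp are illustrative:

```python
import json

# Sample body mirroring the documented spot/instance-action response.
sample_notice = '{"action": "terminate", "time": "2024-01-01T12:00:00Z"}'

def should_drain(metadata_body):
    """Return True if the instance has received a Spot interruption notice.

    The metadata endpoint returns 404 (treated here as an empty body)
    until a notice exists, then a JSON document with "action" and "time".
    """
    if not metadata_body:
        return False
    notice = json.loads(metadata_body)
    return notice.get("action") in ("stop", "terminate")

print(should_drain(sample_notice))  # True
print(should_drain(""))             # False
```

This interruptibility is why Spot suits fault-tolerant ECS workloads but not databases or jobs that must run at a fixed time.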
Incorrect options:
To install cost-effective RDS database - Spot instance capacity can be reclaimed by AWS at any time (with only a two-minute interruption notice) when EC2 needs the capacity back. Hence, Spot instances are suited to fault-tolerant, interruptible compute workloads and not to hosting a persistent database.
To run batch processes for critical workloads - Business-critical workloads should not be run on Spot instances, because the capacity can be reclaimed at any time.
To run scheduled jobs (jobs that run at the same time every day) - There is no guarantee that a Spot instance will be available at a specific time every day. For a scheduled requirement, Scheduled Reserved instances should be used.
Which of the following types are free under the Amazon S3 pricing model? (Select two)
A. Data transferred out to an Amazon Elastic Compute Cloud (Amazon EC2) instance in any AWS Region
B. Data transferred out to an Amazon Elastic Compute Cloud (Amazon EC2) instance, when the instance is in the same AWS Region as the S3 bucket
C. Data transferred in from the internet
D. Data storage fee for objects stored in S3 Glacier
E. Data storage fee for objects stored in S3 Standard
B. Data transferred out to an Amazon Elastic Compute Cloud (Amazon EC2) instance, when the instance is in the same AWS Region as the S3 bucket
C. Data transferred in from the internet
Explanation:
Data transferred in from the internet
Data transferred out to an Amazon Elastic Compute Cloud (Amazon EC2) instance, when the instance is in the same AWS Region as the S3 bucket
There are four cost components to consider for S3 pricing: storage pricing; request and data retrieval pricing; data transfer and transfer acceleration pricing; and data management features pricing. Under “Data Transfer”, you pay for all bandwidth into and out of Amazon S3, except for the following: (1) data transferred in from the internet, (2) data transferred out to an Amazon Elastic Compute Cloud (Amazon EC2) instance, when the instance is in the same AWS Region as the S3 bucket, and (3) data transferred out to Amazon CloudFront.
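The three free-transfer exceptions can be written down as a small classifier, which makes the exam rule easy to check. This is a local illustration of the pricing rules quoted above, not an AWS API:

```python
# Sketch of the S3 data-transfer exceptions: transfer in from the
# internet, transfer out to CloudFront, and transfer out to EC2 in the
# same region as the bucket are free; other transfers are billed.
def s3_transfer_is_free(direction, destination, same_region=False):
    if direction == "in":                       # all transfer in from the internet
        return True
    if destination == "cloudfront":             # transfer out to CloudFront
        return True
    if destination == "ec2" and same_region:    # EC2 in the bucket's region
        return True
    return False

print(s3_transfer_is_free("in", "internet"))                # True
print(s3_transfer_is_free("out", "ec2", same_region=True))  # True
print(s3_transfer_is_free("out", "ec2", same_region=False)) # False
```

Note that storage itself is never free in S3 Standard or S3 Glacier, which is why the two "storage fee" options are incorrect.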
Incorrect options:
Data transferred out to an Amazon Elastic Compute Cloud (Amazon EC2) instance in any AWS Region - This is incorrect. Data transfer charges apply when the instance is not in the same AWS Region as the S3 bucket.
Data storage fee for objects stored in S3 Standard - S3 Standard charges a storage fee for objects.
Data storage fee for objects stored in S3 Glacier - S3 Glacier charges a storage fee for archived objects.
Which AWS service can be used to host a static website with the LEAST effort?
A. AWS Storage Gateway
B. Amazon Simple Storage Service (Amazon S3)
C. Amazon S3 Glacier
D. Amazon Elastic File System (Amazon EFS)
B. Amazon Simple Storage Service (Amazon S3)
Explanation:
Amazon Simple Storage Service (Amazon S3)
Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. Amazon S3’s flat, non-hierarchical structure and various management features are helping customers of all sizes and industries organize their data in ways that are valuable to their businesses and teams.
To host a static website on Amazon S3, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket. When you configure a bucket as a static website, you must enable website hosting, set permissions, and create and add an index document.
Hosting a static website on Amazon S3: via - https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
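Enabling website hosting on a bucket comes down to a small website configuration (index and error documents), which you would pass to S3's PutBucketWebsiteConfiguration API (for example via boto3's `s3.put_bucket_website`). The sketch only builds the configuration; the document names shown are the conventional defaults, used here as assumptions:

```python
# Hedged sketch: the website configuration for an S3 static-website
# bucket. This dict matches the shape the S3 website-configuration API
# expects; index.html / error.html are conventional, assumed names.
website_configuration = {
    "IndexDocument": {"Suffix": "index.html"},  # served for directory requests
    "ErrorDocument": {"Key": "error.html"},     # served on 4xx errors
}
print(website_configuration["IndexDocument"]["Suffix"])  # index.html
```

Beyond this configuration, you only need to upload the site content and set read permissions, which is why S3 is the least-effort option here.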
Incorrect options:
AWS Storage Gateway - AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. It helps on-premises applications to access data on AWS Cloud. It cannot be used to host a website.
Amazon Elastic File System (Amazon EFS) - Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. EFS cannot directly host a static website; it would need to be mounted on an Amazon EC2 instance that serves the content.
Amazon S3 Glacier - Amazon S3 Glacier is a secure, durable, and extremely low-cost Amazon S3 cloud storage class for data archiving and long-term backup. It is designed for archival storage and cannot be used for hosting a website.