AWS Certified Cloud Practitioner Practice Test 2 (Bosos) Flashcards
What is the best type of instance purchasing option to choose if you will run an EC2 instance for 3 months to perform a job that is uninterruptible? A.Spot B.Dedicated C.On-Demand D.Reserved
C.On-Demand
Explanation:
With On-Demand instances you only pay for EC2 instances you use. The use of On-Demand instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs.
This type of instance lets you pay for compute capacity by the hour or second (minimum of 60 seconds) with no long-term commitments.
There is a limit to the number of running On-Demand Instances per AWS account per Region. You can determine whether your On-Demand Instance limits are count-based or vCPU-based. With vCPU-based instance limits, your limits are managed in terms of the number of virtual central processing units (vCPUs) that your running On-Demand Instances are using, regardless of the instance type. You can use the vCPU limits calculator to determine the number of vCPUs that you require for your application needs.
On-Demand instances are the best purchasing option when you need instances for short periods of time and for uninterruptible workloads, since they are the most cost-effective option that still guarantees uninterrupted capacity for this span of time.
Reserved instance is incorrect because although it does offer discounts on hourly cost, you still need to commit at least a whole year’s worth of instance cost to fully maximize the discounts. Since your workload will run for only 3 months, this option is not suitable.
Spot instance is incorrect because this can be terminated by Amazon EC2 based on the long-term supply of and demand for Spot Instances. Hence, this is not recommended for uninterruptible workloads.
Dedicated Instance is incorrect because this is just a type of Amazon EC2 instance that runs in a VPC on hardware that's dedicated to a single customer. It describes tenancy rather than a purchasing option that addresses the scenario's cost and duration requirements.
In the AWS Shared Responsibility Model, whose responsibility is it to patch the host operating system of an Amazon EC2 instance? A.Customer B.AWS C.Neither AWS nor the customer D.Both AWS and the customer
B.AWS
Explanation:
Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.
The customer assumes responsibility and management of the guest operating system (including updates and security patches), other associated application software as well as the configuration of the AWS provided security group firewall. Customers should carefully consider the services they choose as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations.
The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment of customer solutions. This differentiation of responsibility is commonly referred to as Security OF the Cloud versus Security IN the Cloud.
This customer/AWS shared responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between AWS and its customers, so is the management, operation and verification of IT controls shared. AWS can help relieve customer burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by the customer. As every customer is deployed differently in AWS, customers can take advantage of shifting management of certain IT controls to AWS which results in a (new) distributed control environment.
Customers can then use the AWS control and compliance documentation available to them to perform their control evaluation and verification procedures as required. Below are examples of controls that are managed by AWS, AWS Customers and/or both.
Inherited Controls: Controls which a customer fully inherits from AWS.
- Physical and Environmental controls
Shared Controls: Controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services.
Examples include:
- Patch Management: AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.
- Configuration Management: AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.
- Awareness & Training: AWS trains AWS employees, but a customer must train their own employees.
Customer Specific: Controls which are solely the responsibility of the customer based on the application they are deploying within AWS services.
Examples include:
- Service and Communications Protection or Zone Security which may require a customer to route or zone data within specific security environments.
The host operating system, which is managed by AWS, is the hypervisor that creates several guest operating systems that can be managed by different customers. Amazon EC2 uses a technology commonly known as virtualization to run multiple operating systems on a single physical machine. Virtualization ensures that each guest operating system receives its fair share of CPU time, memory, and I/O bandwidth to the local disk and to the network using a host operating system, sometimes known as a hypervisor. The hypervisor also isolates the guest operating systems from each other so that one guest cannot modify or otherwise interfere with another one on the same machine.
Hence, the correct answer is AWS.
Customer is incorrect because their responsibility is to patch the guest operating system of their EC2 instance and not the host operating system.
Both AWS and the customer is incorrect because patching the host operating system of the Amazon EC2 instance is the responsibility of AWS. Take note that if you are using a fully-managed service like Amazon DynamoDB or Redshift, AWS will also be responsible for the underlying guest operating system.
Neither AWS nor the customer is incorrect as this task falls under the responsibilities of AWS.
In AWS, which of the following is a design principle that you should implement when designing your cloud architecture?
A.Always use large servers to anticipate increased usage
B.Use Multiple Availability Zones
C.Utilize free or open-source software
D.Tightly couple your components
B.Use Multiple Availability Zones
Explanation:
There are various best practices that you can follow which can help you build an application in the cloud. The notable ones are:
- Design for failure
- Decouple your components
- Implement elasticity
- Think parallel
The Design for failure principle encourages you to be a pessimist when designing architectures in the cloud; assume things will fail. In other words, always design, implement, and deploy for automated recovery from failure.
In particular, assume that your hardware will fail. Assume that outages will occur. Assume that some disaster will strike your application. Assume that you will be slammed with more than the expected number of requests per second someday. Assume that with time your application software will fail too. By being a pessimist, you end up thinking about recovery strategies during design time, which helps in designing an overall system better.
Designing with an assumption that underlying hardware will fail, will prepare you for the future when it actually fails. This design principle will help you design operations-friendly applications. If you can extend this principle to pro-actively measure and balance load dynamically, you might be able to deal with variance in network and disk performance that exists due to the multi-tenant nature of the cloud.
AWS specific tactics for implementing this best practice are as follows:
- Failover gracefully using Elastic IPs: Elastic IP is a static IP that is dynamically re-mappable. You can quickly remap and failover to another set of servers so that your traffic is routed to the new servers. It works great when you want to upgrade from old to new versions or in case of hardware failures.
- Utilize multiple Availability Zones: Availability Zones are conceptually like logical datacenters. By deploying your architecture to multiple availability zones, you can ensure high availability. Utilize Amazon RDS Multi-AZ deployment functionality to automatically replicate database updates across multiple Availability Zones.
- Maintain an Amazon Machine Image so that you can restore and clone environments very easily in a different Availability Zone; maintain multiple database slaves across Availability Zones and set up hot replication.
- Utilize Amazon CloudWatch (or various real-time open source monitoring tools) to get more visibility and take appropriate actions in case of hardware failure or performance degradation. Set up an Auto Scaling group to maintain a fixed fleet size so that it replaces unhealthy Amazon EC2 instances with new ones.
- Utilize Amazon EBS and set up cron jobs so that incremental snapshots are automatically uploaded to Amazon S3 and data is persisted independent of your instances (see the sketch after this list).
- Utilize Amazon RDS and set the retention period for backups, so that it can perform automated backups.
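To make the EBS snapshot tactic above concrete, here is a minimal sketch using boto3, the AWS SDK for Python; the volume ID is a hypothetical placeholder, and in practice the script would be triggered by a cron job or a scheduled event.

```python
import boto3

# Minimal sketch: create an incremental EBS snapshot, as a cron job might.
# The volume ID below is a placeholder for illustration only.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",          # hypothetical volume ID
    Description="Nightly incremental backup",  # snapshots persist independently of the instance
)
print("Started snapshot:", response["SnapshotId"])
```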
By focusing on concepts and best practices - like designing for failure, decoupling the application components, understanding and implementing elasticity, combining it with parallelization, and integrating security in every aspect of the application architecture - cloud architects can understand the design considerations necessary for building highly scalable cloud applications.
Hence, the correct answer is: Use multiple Availability Zones.
The option that says: Tightly couple your components is incorrect because this is exactly the opposite of the “Decouple your components” cloud design principle.
The option that says: Always use large servers to anticipate increased usage is incorrect because this action doesn't follow the concept of implementing elasticity in your cloud architecture. In this case, it is better to use Auto Scaling to automatically increase or decrease the number of your servers based on the application demand.
The option that says: Utilize free or open-source software is incorrect because this is not considered as one of the cloud design principles nor a best practice.
Which of the following tasks fall under the sole responsibility of AWS based on the shared responsibility model? A.Physical and environmental controls B.Patch Management C.Applying Amazon S3 bucket policies D.Implementing IAM policies
A.Physical and environmental controls
Explanation:
Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.
The customer assumes responsibility and management of the guest operating system (including updates and security patches), other associated application software as well as the configuration of the AWS provided security group firewall. Customers should carefully consider the services they choose as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations.
The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment of customer solutions. This differentiation of responsibility is commonly referred to as Security OF the Cloud versus Security IN the Cloud.
This customer/AWS shared responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between AWS and its customers, so is the management, operation and verification of IT controls shared. AWS can help relieve customer burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by the customer. As every customer is deployed differently in AWS, customers can take advantage of shifting management of certain IT controls to AWS which results in a (new) distributed control environment.
Customers can then use the AWS control and compliance documentation available to them to perform their control evaluation and verification procedures as required. Below are examples of controls that are managed by AWS, AWS Customers and/or both.
Inherited Controls: Controls which a customer fully inherits from AWS.
- Physical and Environmental controls
Shared Controls: Controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services.
Examples include:
- Patch Management: AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.
- Configuration Management: AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.
- Awareness & Training: AWS trains AWS employees, but a customer must train their own employees.
Customer Specific: Controls which are solely the responsibility of the customer based on the application they are deploying within AWS services.
Examples include:
- Service and Communications Protection or Zone Security which may require a customer to route or zone data within specific security environments.
Hence, the correct answer is Physical and environmental controls.
Both Implementing IAM policies and Applying Amazon S3 bucket policies are incorrect because these are the responsibilities of the customer and not AWS.
Patch Management is incorrect because this is actually a shared control between AWS and the customer.
__________ lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. A.Amazon Lightsail B.Virtual Private Gateway C.Amazon WorkSpaces D.Amazon VPC
D.Amazon VPC
Explanation:
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure and easy access to resources and applications.
You can easily customize the network configuration for your Amazon VPC. For example, you can create a public-facing subnet for your web servers that has access to the Internet, and place your backend systems such as databases or application servers in a private-facing subnet with no Internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet.
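To make this concrete, here is a minimal boto3 sketch of such a customized layout, with illustrative CIDR ranges and Availability Zones; it is only one possible way to structure a VPC.

```python
import boto3

# Minimal sketch: carve a VPC into a public-facing subnet for web servers
# and a private subnet for backend systems. CIDR ranges are illustrative.
ec2 = boto3.client("ec2", region_name="us-east-1")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

public_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

private_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1b"
)["Subnet"]["SubnetId"]

# A security group acts as a virtual firewall for instances in these subnets.
sg_id = ec2.create_security_group(
    GroupName="web-sg", Description="Allow HTTP from anywhere", VpcId=vpc_id
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg_id, IpProtocol="tcp", FromPort=80, ToPort=80, CidrIp="0.0.0.0/0"
)
```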
Hence, the correct answer is Amazon VPC.
Amazon Lightsail is incorrect because this service is just a virtual private server (VPS) solution which provides developers with compute, storage, and networking capacity and capabilities to deploy and manage websites and web applications in the cloud.
Virtual Private Gateway is incorrect because this is primarily used for connecting your on-premises network to your VPC.
Amazon WorkSpaces is incorrect because this is just a Desktop-as-a-Service (DaaS) solution in AWS which allows you to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.
Which of the following provides software solutions that are either hosted on or integrated with the AWS platform which may include Independent Software Vendors (ISVs), SaaS, PaaS, developer tools, management, and security vendors?
A.AWS Partner Network Consulting Partners
B.AWS Partner Network Technology Partners
C.Concierge Support
D.Technical Account management
B.AWS Partner Network Technology Partners
Explanation:
The AWS Partner Network (APN) is focused on helping partners build successful AWS-based businesses to drive superb customer experiences. This is accomplished by developing a global ecosystem of Partners with specialties unique to each customer’s needs.
There are two types of APN Partners:
- APN Consulting Partners
- APN Technology Partners
APN Consulting Partners are professional services firms that help customers of all sizes design, architect, migrate, or build new applications on AWS. Consulting Partners include System Integrators (SIs), Strategic Consultancies, Resellers, Digital Agencies, Managed Service Providers (MSPs), and Value-Added Resellers (VARs).
APN Technology Partners provide software solutions that are either hosted on, or integrated with, the AWS platform. Technology Partners include Independent Software Vendors (ISVs), SaaS, PaaS, developer tools, management and security vendors.
Hence, the correct answer in this scenario is APN Technology Partners.
APN Consulting Partners is incorrect because this program only helps customers to design, architect, migrate, or build new applications on AWS. You have to use APN Technology Partners instead.
Concierge Support is incorrect because this is a team composed of AWS billing and account experts that specialize in working with enterprise accounts. They will quickly and efficiently assist you with your billing and account inquiries, and work with you to implement billing and account best practices so that you can focus on running your business.
Technical Account Management is incorrect because this is just a part of AWS Enterprise Support which provides advocacy and guidance to help plan and build solutions using best practices, coordinate access to subject matter experts and product teams, and proactively keep your AWS environment operationally healthy.
Which of the following policies grant the necessary permissions required to access your Amazon S3 resources? (Select TWO.) A.Object policies B.Routing policies C.User policies D.Bucket policies E.Network access control policies
C.User policies
D.Bucket policies
Explanation:
When granting permissions, you decide who is getting them, which Amazon S3 resources they are getting permissions for, and specific actions you want to allow on those resources. Buckets and objects are Amazon S3 resources. By default, only the resource owner can access these resources. The resource owner refers to the AWS account that creates the resource.
Bucket policy and user policy are two of the access policy options available for you to grant permission to your Amazon S3 resources. Both use JSON-based access policy language. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. User policies are policies that allow an IAM User access to one of your buckets.
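As an illustration, a bucket policy is just a JSON document attached to the bucket. Here is a minimal boto3 sketch of applying one; the bucket name, account ID, and user are hypothetical placeholders.

```python
import json
import boto3

# Minimal sketch: attach a bucket policy granting read-only access to the
# objects in a bucket. The bucket, account, and user are placeholders.
s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGetObject",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/analyst"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))
```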
Hence, the correct answers are bucket policy and user policy.
All other options (routing policies, network access control policies, and object policies) are incorrect as these are not the correct features that will grant permissions to your Amazon S3 bucket.
Which of the following are the pillars of the AWS Well-Architected Framework? (Select TWO.) A.Performance Efficiency B.Agility C.High Availability D.Operational Excellence E.Scalability
A.Performance Efficiency
D.Operational Excellence
Explanation:
The Well-Architected Framework has been developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. This is based on five pillars namely:
- Operational Excellence
- Security
- Reliability
- Performance Efficiency
- Cost Optimization
This Framework provides a consistent approach for customers and partners to evaluate architectures, and implement designs that will scale over time.
The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using this Framework, you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. The process for reviewing an architecture is a constructive conversation about architectural decisions and is not an audit mechanism. Having well-architected systems greatly increases the likelihood of business success.
AWS Solutions Architects have years of experience architecting solutions across a wide variety of business verticals and use cases. AWS has helped design and review thousands of customers’ architectures on AWS. From this experience, AWS has identified best practices and core strategies for architecting systems in the cloud that you can also implement.
You can also use the AWS Well-Architected Tool; it helps you review the state of your workloads and compares them to the latest AWS architectural best practices. The tool is based on the AWS Well-Architected Framework, developed to help cloud architects build secure, high-performing, resilient, and efficient application infrastructure.
Hence, the correct answers are Operational Excellence and Performance Efficiency.
High Availability, Scalability and Agility are all incorrect because these are not part of the 5 AWS Well-Architected Framework pillars.
Which of the following will allow you to create a data warehouse in AWS for your business intelligence needs? A.Amazon RDS B.Amazon DynamoDB C.Amazon Redshift D.Amazon S3
C.Amazon Redshift
Explanation:
Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale out to petabytes of data for $1,000 per terabyte per year, less than a tenth the cost of traditional solutions.
Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to directly run SQL queries against exabytes of unstructured data in Amazon S3. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, SequenceFile, TextFile, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data being retrieved, so queries against Amazon S3 run fast, regardless of data set size.
Traditional data warehouses require significant time and resource to administer, especially for large datasets. In addition, the financial cost associated with building, maintaining, and growing self-managed, on-premises data warehouses is very high. As your data grows, you have to constantly trade-off what data to load into your data warehouse and what data to archive in storage so you can manage costs, keep ETL complexity low, and deliver good performance. Amazon Redshift not only significantly lowers the cost and operational overhead of a data warehouse, but with Redshift Spectrum, also makes it easy to analyze large amounts of data in its native format without requiring you to load the data.
Hence, the correct answer is Amazon Redshift.
Amazon Relational Database Service (Amazon RDS) is incorrect since this is a relational (SQL) database in the cloud, not a data warehouse.
Amazon DynamoDB is incorrect since this service is a non-relational (noSQL) database in the cloud, not a data warehouse.
Amazon S3 is incorrect since this service is a durable cloud storage for objects and files, and not a data warehouse.
A company plans to migrate their on-premises MySQL database to Amazon RDS. Which AWS service should they use for this task?
A.AWS Server Migration Service
B.AWS Direct Connect
C.AWS Schema Conversion Tool (AWS SCT)
D.AWS Database Migration Service (AWS DMS)
D.AWS Database Migration Service (AWS DMS)
Explanation:
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
AWS Database Migration Service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle or Microsoft SQL Server to Amazon Aurora. With AWS Database Migration Service, you can continuously replicate your data with high availability and consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3.
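As a rough illustration, a DMS migration boils down to a replication task that connects a source endpoint to a target endpoint. The boto3 sketch below assumes placeholder ARNs for endpoints and a replication instance that would already exist.

```python
import json
import boto3

# Minimal sketch: a DMS replication task for a homogeneous MySQL -> Amazon RDS
# for MySQL migration. All ARNs are placeholders.
dms = boto3.client("dms", region_name="us-east-1")

table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all-tables",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-rds",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # full load plus ongoing replication
    TableMappings=json.dumps(table_mappings),
)
```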
The AWS Schema Conversion Tool makes heterogeneous database migrations predictable by automatically converting the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format compatible with the target database. Any objects that cannot be automatically converted are clearly marked so that they can be manually converted to complete the migration. SCT can also scan your application source code for embedded SQL statements and convert them as part of a database schema conversion project.
During this process, SCT performs cloud-native code optimization by converting legacy Oracle and SQL Server functions to their equivalent AWS service thus helping you modernize the applications at the same time of database migration. Once schema conversion is complete, SCT can help migrate data from a range of data warehouses to Amazon Redshift using built-in data migration agents. For example, it can convert PostgreSQL to MySQL or an Oracle Data Warehouse to Amazon Redshift.
Hence, the correct answer is AWS Database Migration Service (AWS DMS).
AWS Schema Conversion Tool (AWS SCT) is incorrect because this is primarily used to convert your existing database schema from one database engine to another. The scenario didn’t mention anything about migrating the MySQL database to another database type. Since the task is to just migrate their on-premises MySQL database to Amazon RDS, you simply need to use the AWS Database Migration Service (AWS DMS).
AWS Server Migration Service is incorrect because this is just an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. This is not the appropriate service to use in migrating your on-premises database.
AWS Direct Connect is incorrect because this is just a cloud service solution that makes it easier for you to establish a dedicated network connection from your premises to AWS.
Which of the following best describes the concept of the loose coupling design principle?
A.Increase the number of resources by adding more hard drives to a storage array or adding more servers
B.Increase the specifications of an individual resource by upgrading a server with a larger hard drive or a faster CPU
C.A change or a failure in one component must be cascaded to other components
D.A change or a failure in one component should not cascade to other components
D.A change or a failure in one component should not cascade to other components
Explanation:
The AWS Cloud includes many design patterns and architectural options that you can apply to a wide variety of use cases. Some key design principles of the AWS Cloud include scalability, disposable resources, automation, loose coupling, managed services instead of servers, and flexible data storage options.
As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies — a change or a failure in one component should not cascade to other components.
A way to reduce interdependencies in a system is to allow the various components to interact with each other only through specific, technology-agnostic interfaces, such as RESTful APIs. In that way, technical implementation detail is hidden so that teams can modify the underlying implementation without affecting other components. As long as those interfaces maintain backward compatibility, deployments of different components are decoupled. This granular design pattern is commonly referred to as a microservices architecture.
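As a toy, AWS-agnostic sketch of this principle, the producer below talks only to a queue interface and knows nothing about the consumer, so a slow or failing consumer does not cascade back into the producer:

```python
import queue
import threading

# Toy sketch of loose coupling: producer and consumer share only a queue
# interface, so a failure in the consumer stays contained on its side.
work_queue: "queue.Queue[str]" = queue.Queue()

def producer() -> None:
    for i in range(5):
        work_queue.put(f"order-{i}")  # fire-and-forget; no knowledge of the consumer

def consumer() -> None:
    while True:
        order = work_queue.get()
        try:
            print("processing", order)  # an error here does not reach the producer
        finally:
            work_queue.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer()
work_queue.join()  # wait until every queued order has been processed
```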
Hence, the correct answer is: A change or a failure in one component should not cascade to other components.
The option that says: Increase the specifications of an individual resource by upgrading a server with a larger hard drive or a faster CPU is incorrect because this refers to Vertical Scaling.
The option that says: A change or a failure in one component must be cascaded to other components is incorrect because it should be the other way around. IT systems should be designed in a way that reduces interdependencies, in which a change or a failure in one component does not cascade to other components.
The option that says: Increase the number of resources by adding more hard drives to a storage array or adding more servers is incorrect because this refers to Horizontal Scaling.
Which of the following services allows you to easily migrate petabyte-scale data to AWS? A.AWS Transit Gateway B.AWS Data Pipeline C.AWS Snowball D.Amazon SQS
C.AWS Snowball
Explanation:
AWS Snowball is a petabyte-scale data transport solution that uses devices designed to be secure to transfer large amounts of data into and out of the AWS Cloud. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns.
With Snowball, you don’t need to write any code or purchase any hardware to transfer your data. Simply create a job in the AWS Management Console (“Console”) and a Snowball device will be automatically shipped to you. Once it arrives, attach the device to your local network, download and run the Snowball Client (“Client”) to establish a connection, and then use the Client to select the file directories that you want to transfer to the device. The Client will then encrypt and transfer the files to the device at high speed. Once the transfer is complete and the device is ready to be returned, the E Ink shipping label will automatically update and you can track the job status via Amazon Simple Notification Service (SNS), text messages, or directly in the Console.
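A job can also be requested programmatically. Below is a minimal boto3 sketch of creating an import job; the address ID, role ARN, and bucket are hypothetical placeholders that would be set up beforehand.

```python
import boto3

# Minimal sketch: request an import job so a Snowball device is shipped to you.
# Address, role, and bucket values are placeholders created in advance.
snowball = boto3.client("snowball", region_name="us-east-1")

job = snowball.create_job(
    JobType="IMPORT",
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-bucket"}]},
    Description="Data center migration batch 1",
    AddressId="ADID00000000-0000-0000-0000-000000000000",  # from create_address
    RoleARN="arn:aws:iam::111122223333:role/snowball-import-role",
    SnowballCapacityPreference="T80",
    ShippingOption="SECOND_DAY",
)
print("Snowball job created:", job["JobId"])
```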
Hence, the correct answer is AWS Snowball.
AWS Data Pipeline is incorrect since this service does not offer an easy solution for transporting petabyte-scale data from data centers to AWS.
Amazon SQS is incorrect since this is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. This service is not meant for petabyte-scale data migration.
AWS Transit Gateway is incorrect since this is a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway.
The IT Security team of your company needs to conduct a vulnerability analysis on your application servers to ensure that the EC2 instances comply with the annual security IT audit. You need to set up an automated security assessment service to improve the security and compliance of your applications. The solution should automatically assess applications for exposure, vulnerabilities, and deviations from the AWS best practices.
Which of the following options would you implement to satisfy this requirement? A.Amazon Inspector B.AWS Snowball C.AWS WAF D.Amazon CloudFront
A.Amazon Inspector
Explanation:
Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of detailed assessment reports which are available via the Amazon Inspector console or API.
Amazon Inspector security assessments help you check for unintended network accessibility of your Amazon EC2 instances and for vulnerabilities on those EC2 instances. Amazon Inspector assessments are offered to you as pre-defined rules packages mapped to common security best practices and vulnerability definitions. Examples of built-in rules include checking for access to your EC2 instances from the internet, remote root login being enabled, or vulnerable software versions installed. These rules are regularly updated by AWS security researchers.
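To see how this looks in practice, here is a minimal boto3 sketch of setting up and starting an assessment run with the Inspector classic API; the tag values and rules package ARN are illustrative placeholders (real rules package ARNs vary by region and can be listed with list_rules_packages()).

```python
import boto3

# Minimal sketch: run a tag-targeted Amazon Inspector (classic) assessment
# against EC2 instances. Tag and rules package ARN are placeholders.
inspector = boto3.client("inspector", region_name="us-east-1")

group_arn = inspector.create_resource_group(
    resourceGroupTags=[{"key": "Environment", "value": "audit"}]
)["resourceGroupArn"]

target_arn = inspector.create_assessment_target(
    assessmentTargetName="app-servers", resourceGroupArn=group_arn
)["assessmentTargetArn"]

template_arn = inspector.create_assessment_template(
    assessmentTargetArn=target_arn,
    assessmentTemplateName="annual-audit",
    durationInSeconds=3600,
    rulesPackageArns=["arn:aws:inspector:us-east-1:123456789012:rulespackage/0-EXAMPLE"],
)["assessmentTemplateArn"]

run_arn = inspector.start_assessment_run(
    assessmentTemplateArn=template_arn
)["assessmentRunArn"]
print("Assessment run started:", run_arn)
```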
Hence, the correct answer is: Amazon Inspector.
AWS WAF is incorrect because this is a web application firewall that helps protect your web applications from common exploits such as SQL injection and cross-site scripting; it does not perform automated security assessments of your EC2 instances.
AWS Snowball is incorrect because Snowball is mainly used to transfer data from your on-premises network to AWS.
Amazon CloudFront is incorrect because CloudFront is used as a content distribution service.
In compliance with the Sarbanes-Oxley Act (SOX) federal law, a US-based company is required to provide SOC 1 and SOC 2 reports of their cloud resources. Where are these AWS compliance documents located? A.AWS Certificate Manager B.AWS Artifact C.AWS Organizations D.AWS GovCloud
B.AWS Artifact
Explanation:
The Service Organization Controls (SOC) Reports are used to evaluate the effectiveness of AWS controls that might affect your internal controls over financial reporting (ICOFR). The audit is performed according to the SSAE 18 and ISAE 3402 standards. Many AWS customers use this report as an integral part of their Sarbanes-Oxley (SOX) efforts.
AWS Artifact is your go-to, central resource for compliance-related information that matters to you. It provides on-demand access to AWS’ security and compliance reports and select online agreements. Reports available in AWS Artifact include our Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls. Agreements available in AWS Artifact include the Business Associate Addendum (BAA) and the Nondisclosure Agreement (NDA).
All AWS Accounts have access to AWS Artifact. Root users and IAM users with admin permissions can download all audit artifacts available to their account by agreeing to the associated terms and conditions. You will need to grant IAM users with non-admin permissions access to AWS Artifact using IAM permissions. This allows you to grant a user access to AWS Artifact, while restricting access to other services and resources within your AWS Account.
Hence, the correct answer in this scenario is AWS Artifact.
AWS GovCloud is incorrect because this is basically just an isolated AWS Region designed to allow U.S. government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. It is not a repository of compliance documents like AWS Artifact.
AWS Organizations is incorrect because this just helps you centrally govern your environment as you grow and scale your workloads in AWS.
AWS Certificate Manager is incorrect because this is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. This service does not store certifications or compliance-related documents.
Which AWS service is commonly used for streaming data in real-time? A.Amazon EMR B.AWS Data Pipeline C.Amazon Kinesis D.Amazon Elasticsearch
C.Amazon Kinesis
Explanation:
With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before the processing can begin.
Amazon Kinesis can handle any amount of streaming data and process data from hundreds of thousands of sources with very low latencies. It enables you to ingest, buffer, and process streaming data in real-time, so you can derive insights in seconds or minutes instead of hours or days.
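As a small illustration, ingesting a clickstream event into a Kinesis data stream can be as simple as the boto3 sketch below; the stream name and event fields are hypothetical.

```python
import json
import boto3

# Minimal sketch: push a clickstream event into a Kinesis data stream.
# The partition key spreads records across shards.
kinesis = boto3.client("kinesis", region_name="us-east-1")

event = {"user_id": "u-42", "page": "/checkout", "ts": "2023-01-01T00:00:00Z"}

kinesis.put_record(
    StreamName="clickstream",                      # placeholder stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],
)
```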
Hence, the correct answer is Amazon Kinesis.
Amazon Elasticsearch is incorrect because it's just a fully managed service that allows you to deploy, secure, and operate Elasticsearch at scale with zero downtime.
Amazon EMR is incorrect since this is just a big data service that gives analytical teams the engines and elasticity to run Petabyte-scale analysis for a fraction of the cost of traditional on-premise clusters, using open source Apache tools.
AWS Data Pipeline is incorrect because this is simply a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals.
Which service allows you to send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available? A.Amazon SES B.Amazon SWF C.Amazon SQS D.Amazon Route 53
C.Amazon SQS
Explanation:
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.
You can get started with SQS in minutes using the AWS console, Command Line Interface or SDK of your choice, and three simple commands. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
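Here is a minimal boto3 sketch of that send/store/receive cycle; the queue name is a hypothetical placeholder.

```python
import boto3

# Minimal sketch of the SQS send/store/receive cycle.
sqs = boto3.client("sqs", region_name="us-east-1")

queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]

# Producer: send a message; SQS stores it until a consumer picks it up.
sqs.send_message(QueueUrl=queue_url, MessageBody="order-1001")

# Consumer: long-poll for messages, process, then delete to acknowledge.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
).get("Messages", [])

for msg in messages:
    print("processing", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```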
Hence, the correct answer is Amazon SQS.
Amazon SWF is incorrect since this is a service that is meant for automation. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud.
Amazon Route 53 is incorrect since this is a DNS web service in AWS.
Amazon SES is incorrect since this is a cloud-based email sending service.
Which of the following is the most cost-effective instance purchasing option for hosting an application which will run non-interruptible workloads for a period of three years?
A.Amazon EC2 Scheduled Reserved Instances
B.Amazon EC2 Standard Reserved Instances
C.Amazon EC2 On-Demand Instances
D.Amazon EC2 Spot Instances
B.Amazon EC2 Standard Reserved Instances
Explanation:
Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.
Standard Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing and can be purchased for a 1-year or 3-year term. The average discount off On-Demand instances varies based on your term and chosen payment options (up to 40% for 1-year and 60% for a 3-year term). Customers have the flexibility to change the Availability Zone, the instance size, and networking type of their Standard Reserved Instances.
Convertible Reserved Instances provide you with a significant discount (up to 54%) compared to On-Demand Instances and can be purchased for a 1-year or 3-year term. Purchase Convertible Reserved Instances if you need additional flexibility, such as the ability to use different instance families, operating systems, or tenancies over the Reserved Instance term.
Hence, the correct answer is Amazon EC2 Standard Reserved Instances.
The Amazon EC2 Spot Instances option is incorrect because although this is the most cost-effective type, this instance can be interrupted by Amazon EC2 for capacity requirements making it not suitable for non-interruptible workloads.
The Amazon EC2 Scheduled Reserved Instances option is incorrect because this will just enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term only and not for a three-year term. This is more suitable for non-continuous workloads that run only at specific times of the day.
The Amazon EC2 On-Demand Instances option is incorrect because although it is suitable to run non-interruptible workloads for a period of three years, it entails a higher running cost compared to Reserved or Spot instances. In fact, this is actually the most expensive type of EC2 instance and not the cheapest one.
Which of the following is not required when launching an EBS-backed EC2 instance? A.EBS Root Volume B.VPC and subnet specifications C.Security group D.Elastic IP address
D.Elastic IP address
Explanation:
Instances that use Amazon EBS for the root device automatically have an Amazon EBS volume attached. When you launch an Amazon EBS-backed instance, we create an Amazon EBS volume for each Amazon EBS snapshot referenced by the AMI you use. You can optionally use other Amazon EBS volumes or instance store volumes, depending on the instance type.
An Amazon EBS-backed instance can be stopped and later restarted without affecting data stored in the attached volumes. There are various instance/volume-related tasks you can do when an Amazon EBS-backed instance is in a stopped state. For example, you can modify the properties of the instance, change its size, or update the kernel it is using, or you can attach your root volume to a different running instance for debugging or any other purpose.
When launching an EC2 instance, you are not required to provide an Elastic IP address. If the instance is a public web server, then you can optionally choose to have an AWS-provided public IP address assigned to it. This IP address will depend on the setting of the subnet where you launched the instance.
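To illustrate, the boto3 sketch below launches an EBS-backed instance with only the required inputs (an EBS-backed AMI, a subnet, and a security group) and no Elastic IP anywhere; all IDs are placeholders.

```python
import boto3

# Minimal sketch: launch an EBS-backed instance. Note there is no Elastic IP
# parameter at all; a public IP, if any, comes from the subnet's auto-assign
# setting. All resource IDs are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # EBS-backed AMI supplies the root volume
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",        # VPC and subnet specification
    SecurityGroupIds=["sg-0123456789abcdef0"],  # security group
)
```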
Hence, Elastic IP address is the correct answer.
Security groups, EBS root volumes, and VPC and subnet values are all required when launching an EC2 instance.
Which of the following is one of the benefits of migrating your systems from an on-premises data center to AWS Cloud?
A.Eliminates the need for the customer to implement client-side or server-side encryption for their data
B.Completely eliminates the administrative overhead of patching the guest operating system of their EC2 instances
C.Enables the customer to eliminate high IT infrastructure costs since cloud computing is absolutely free
D.Enables the customer to focus on business activities rather than on the heavy lifting of racking, stacking, and powering servers
D.Enables the customer to focus on business activities rather than on the heavy lifting of racking, stacking, and powering servers
Explanation:
Cloud computing is the on-demand delivery of compute power, database, storage, applications, and other IT resources via the internet with pay-as-you-go pricing.
Whether you are using it to run applications that share photos to millions of mobile users or to support business critical operations, a cloud services platform provides rapid access to flexible and low cost IT resources. With cloud computing, you don’t need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware. Instead, you can provision exactly the right type and size of computing resources you need to power your newest idea or operate your IT department. You can access as many resources as you need, almost instantly, and only pay for what you use.
There are six advantages of using Cloud Computing:
- Trade capital expense for variable expense
– Instead of having to invest heavily in data centers and servers before you know how you’re going to use them, you can pay only when you consume computing resources, and pay only for how much you consume.
- Benefit from massive economies of scale
– By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay as-you-go prices.
- Stop guessing capacity
– Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes’ notice.
- Increase speed and agility
– In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.
- Stop spending money running and maintaining data centers
– Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers.
- Go global in minutes
– Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost.
Hence, the correct answer is: Enables the customer to focus on business activities rather than on the heavy lifting of racking, stacking, and powering servers.
The option that says: Enables the customer to eliminate high IT infrastructure costs since cloud computing is absolutely free is incorrect because although it is true that cloud computing can lessen or eliminate exorbitant IT infrastructure costs, the customers will still be charged based on their usage in AWS. You can opt to use the AWS Free Tier (which has limited capabilities) for testing but this is not considered a benefit of using AWS over your traditional data center.
The option that says: Completely eliminates the administrative overhead of patching the guest operating system of their EC2 instances is incorrect because based on the Shared Responsibility Model, the customer is the one responsible for patching the guest OS while AWS is responsible for the underlying host OS of the EC2 instance.
The option that says: Eliminates the need for the customer to implement client-side or server-side encryption for their data is incorrect because based on the Shared Responsibility Model, the customer is responsible for applying the encryption of their data.
Which of the following is the most cost-effective option when you purchase either a Standard or Convertible Reserved Instance for a 1-year term? A.No Upfront B.All Upfront C.Partial Upfront D.Deferred
B.All Upfront
Explanation:
Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.
Standard Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing and can be purchased for a 1-year or 3-year term. The average discount off On-Demand instances varies based on your term and chosen payment options (up to 40% for 1-year and 60% for a 3-year term). Customers have the flexibility to change the Availability Zone, the instance size, and networking type of their Standard Reserved Instances.
Convertible Reserved Instances provide you with a significant discount (up to 54%) compared to On-Demand Instances and can be purchased for a 1-year or 3-year term. Purchase Convertible Reserved Instances if you need additional flexibility, such as the ability to use different instance families, operating systems, or tenancies over the Reserved Instance term.
You can choose between three payment options when you purchase a Standard or Convertible Reserved Instance:
All Upfront option: You pay for the entire Reserved Instance term with one upfront payment. This option provides you with the largest discount compared to On-Demand instance pricing.
Partial Upfront option: You make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term.
No Upfront option: Does not require any upfront payment and provides a discounted hourly rate for the duration of the term.
Here’s a sample calculation to see the price difference between a Standard RI and Convertible RI on various payment options for 1-year and 3-year terms:
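The figures below are purely hypothetical rates used for illustration, not actual AWS pricing; the point is only the relative ordering of the options.

```python
# Back-of-the-envelope comparison using purely hypothetical rates -- not
# actual AWS pricing. All Upfront trades one payment for the lowest total.
HOURS_PER_YEAR = 8760

options = {
    "On-Demand":              {"upfront": 0,   "hourly": 0.100},
    "No Upfront (1-yr RI)":   {"upfront": 0,   "hourly": 0.065},
    "Partial Upfront (1-yr)": {"upfront": 280, "hourly": 0.032},
    "All Upfront (1-yr RI)":  {"upfront": 550, "hourly": 0.000},
}

for name, o in options.items():
    total = o["upfront"] + o["hourly"] * HOURS_PER_YEAR
    print(f"{name:<24} total for 1 year: ${total:,.2f}")
```

With these illustrative numbers, All Upfront ($550) beats Partial Upfront ($560.32), No Upfront ($569.40), and On-Demand ($876), matching the ordering described below.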
As a general rule, Standard RI provides more savings than Convertible RI, which means that the former is the cost-effective option. The All Upfront option provides you with the largest discount compared with the other types. Opting for a longer compute reservation, such as the 3-year term, gives us greater discount as opposed to a shorter 1-year renewable term.
Hence, the correct answer is All Upfront.
Partial Upfront is incorrect because although it is more cost-effective than No Upfront option, its cost is higher compared with the All Upfront option.
No Upfront is incorrect because although it does not require any upfront payment and provides a discounted hourly rate for the duration of the term, it still costs higher than both the All Upfront and Partial Upfront options.
Deferred is incorrect because this is not an available option for Reserved Instance pricing.
Which of the following is used to enable instances in the public subnet to connect to the public Internet? A.NAT Gateway B.API Gateway C.Internet Gateway D.NAT instance
C.Internet Gateway
Explanation:
An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.
To enable communication over the internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that’s associated with a private IPv4 address on your instance. Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet.
The Internet gateway logically provides the one-to-one NAT on behalf of your instance, so that when traffic leaves your VPC subnet and goes to the Internet, the reply address field is set to the public IPv4 address or Elastic IP address of your instance, and not its private IP address. Conversely, traffic that’s destined for the public IPv4 address or Elastic IP address of your instance has its destination address translated into the instance’s private IPv4 address before the traffic is delivered to the VPC.
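As an illustration, here is a minimal boto3 sketch of wiring an Internet gateway into a VPC route table; the VPC and route table IDs are placeholders.

```python
import boto3

# Minimal sketch: give instances in a public subnet a route to the Internet.
# The VPC and route table IDs below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId="vpc-0123456789abcdef0")

# Route all Internet-bound IPv4 traffic through the Internet gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)
```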
Both NAT Gateways and NAT Instances are incorrect because these are simply used to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating connections with the instances.
API Gateway is incorrect since this is a service meant for creating, publishing, maintaining, monitoring, and securing APIs.
You are permitted to conduct security assessments and penetration testing without prior approval against which AWS resources? (Select TWO.)
A.Amazon S3
B.AWS Identity and Access Management (IAM)
C.AWS Security Token Service (STS)
D.Amazon Aurora
E.Amazon RDS
D.Amazon Aurora
E.Amazon RDS
Explanation:
Cloud security at AWS is the highest priority. As an AWS customer, you will benefit from a data center and network architecture built to meet the requirements of the most security-sensitive organizations. An advantage of the AWS cloud is that it allows customers to scale and innovate, while maintaining a secure environment. Customers pay only for the services they use, meaning that you can have the security you need, but without the upfront expenses, and at a lower cost than in an on-premises environment.
AWS customers are welcome to carry out security assessments or penetration tests against their AWS infrastructure without prior approval, but only for a limited list of services.
Permitted Services – You’re welcome to conduct security assessments against AWS resources that you own if they make use of the services listed below. Take note that AWS is constantly updating this list:
- Amazon EC2 instances, NAT Gateways, and Elastic Load Balancers
- Amazon RDS
- Amazon CloudFront
- Amazon Aurora
- Amazon API Gateways
- AWS Lambda and Lambda Edge functions
- Amazon Lightsail resources
- AWS Elastic Beanstalk environments
Prohibited Activities – The following activities are prohibited at this time:
- DNS zone walking via Amazon Route 53 Hosted Zones
- Denial of Service (DoS), Distributed Denial of Service (DDoS), Simulated DoS, Simulated DDoS
- Port flooding
- Protocol flooding
- Request flooding (login request flooding, API request flooding)
Hence, the correct answers are: Amazon RDS and Amazon Aurora.
All other options are incorrect since they are not included in the list shown above.
Which of the following is the benefit of using Amazon Relational Database Service (Amazon RDS) over traditional database management?
A.It is five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases
B.Automatically scales up the instance type of your RDS cluster based on demand
C.Lower administrative burden through automatic software patching and maintenance of the underlying operating system
D.Automatically apply both client-side and server-side encryption to your data by default
C.Lower administrative burden through automatic software patching and maintenance of the underlying operating system
Explanation:
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need.
Amazon RDS is available on several database instance types - optimized for memory, performance or I/O - and provides you with several database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. You can use the AWS Database Migration Service to easily migrate or replicate your existing databases to Amazon RDS.
You can use the AWS Management Console, the Amazon RDS Command Line Interface, or simple API calls to access the capabilities of a production-ready relational database in minutes. Amazon RDS database instances are pre-configured with parameters and settings appropriate for the engine and class you have selected. You can launch a database instance and connect your application within minutes. DB Parameter Groups provide granular control and fine-tuning of your database.
Amazon RDS will make sure that the relational database software powering your deployment stays up-to-date with the latest patches. You can exert optional control over when and if your database instance is patched.
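To make this concrete, below is a minimal boto3 (Python) sketch, not an official example; the instance identifier, credentials, and maintenance window are hypothetical. It shows the two patching controls mentioned above: AutoMinorVersionUpgrade lets RDS apply minor engine patches automatically, and PreferredMaintenanceWindow restricts when patching may occur.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Launch a database instance; RDS provisions and patches the underlying
    # host OS and database engine on your behalf.
    rds.create_db_instance(
        DBInstanceIdentifier="example-db",      # hypothetical name
        DBInstanceClass="db.t3.micro",
        Engine="mysql",
        MasterUsername="admin",
        MasterUserPassword="example-password",  # placeholder; use Secrets Manager in practice
        AllocatedStorage=20,
        AutoMinorVersionUpgrade=True,           # apply minor engine patches automatically
        PreferredMaintenanceWindow="sun:05:00-sun:06:00",  # when patching may occur
    )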
Hence, the correct answer is: Lower administrative burden through automatic software patching and maintenance of the underlying operating system.
The option that says: Automatically apply both client-side and server-side encryption to your data by default is incorrect because this is not done by RDS at all. In RDS, you can manually configure your database cluster in order to secure your data at rest or in transit but this is not done automatically by default.
The option that says: Automatically scales up the instance type of your RDS cluster based on demand is incorrect because in RDS, you still have to manually upgrade the underlying instance type of your database cluster in order to scale it up.
The option that says: It is five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases is incorrect because this is not a feature of Amazon RDS but of Amazon Aurora.
Which of the following should you use if you need to provide temporary AWS credentials for users who have been authenticated via their social media logins as well as for guest users who do not require any authentication? A.Amazon Cognito User Pool B.Amazon Cognito Sync C.Amazon Cognito Identity Pool D.AWS Single Sign-On
C.Amazon Cognito Identity Pool
Explanation:
Amazon Cognito identity pools provide temporary AWS credentials for users who are guests (unauthenticated) and for users who have been authenticated and received a token. An identity pool is a store of user identity data specific to your account.
Amazon Cognito identity pools enable you to create unique identities and assign permissions for users. Your identity pool can include:
- Users in an Amazon Cognito user pool
- Users who authenticate with external identity providers such as Facebook, Google, or a SAML-based identity provider
- Users authenticated via your own existing authentication process
With an identity pool, you can obtain temporary AWS credentials with permissions you define to directly access other AWS services or to access resources through Amazon API Gateway.
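As an illustration of that flow, here is a minimal boto3 (Python) sketch for a guest user, assuming a hypothetical identity pool ID and that unauthenticated identities are enabled on the pool:

    import boto3

    cognito = boto3.client("cognito-identity", region_name="us-east-1")

    # Get a unique identity ID for a guest; no Logins map is supplied,
    # so this is an unauthenticated identity.
    identity = cognito.get_id(
        IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000"  # hypothetical
    )

    # Exchange the identity ID for temporary, limited-privilege AWS credentials.
    creds = cognito.get_credentials_for_identity(IdentityId=identity["IdentityId"])
    print(creds["Credentials"]["AccessKeyId"], creds["Credentials"]["Expiration"])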
Hence, the correct answer is to use Amazon Cognito Identity Pool.
Amazon Cognito User Pool is incorrect because a user pool is just a user directory in Amazon Cognito. In addition, it doesn’t enable access to unauthenticated identities. You have to use an Identity Pool instead.
Amazon Cognito Sync is incorrect because this is just a client library that enables cross-device syncing of application-related user data.
AWS Single Sign-On is incorrect because this service just makes it easy for you to centrally manage SSO access to multiple AWS accounts. It also does not allow any “guest” or unauthenticated access, unlike Amazon Cognito.
Which of the following is a valid characteristic of an IAM Group?
A.There’s no limit to the number of groups you can have
B.A group can contain many users, and a user can belong to multiple groups
C.There is a default group that automatically includes all users in the AWS account
D.Groups can be nested
B.A group can contain many users, and a user can belong to multiple groups
Explanation:
An IAM group is a collection of IAM users. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users. For example, you could have a group called Admins and give that group the types of permissions that administrators typically need. Any user in that group automatically has the permissions that are assigned to the group.
If a new user joins your organization and needs administrator privileges, you can assign the appropriate permissions by adding the user to that group. Similarly, if a person changes jobs in your organization, instead of editing that user’s permissions, you can remove him or her from the old groups and add him or her to the appropriate new groups.
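A minimal boto3 (Python) sketch of that workflow follows; the group name, user name, and managed policy are illustrative:

    import boto3

    iam = boto3.client("iam")

    # Create the group and give it administrator permissions once.
    iam.create_group(GroupName="Admins")
    iam.attach_group_policy(
        GroupName="Admins",
        PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
    )

    # A new hire inherits the group's permissions simply by being added.
    iam.add_user_to_group(GroupName="Admins", UserName="new-hire")

    # When they change jobs, remove them rather than editing their policies.
    iam.remove_user_from_group(GroupName="Admins", UserName="new-hire")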
Note that a group is not truly an “identity” in IAM because it cannot be identified as a Principal in a permission policy. It is simply a way to attach policies to multiple users at one time.
Following are some important characteristics of groups:
- A group can contain many users, and a user can belong to multiple groups.
- Groups can’t be nested; they can contain only users, not other groups.
- There’s no default group that automatically includes all users in the AWS account. If you want to have a group like that, you need to create it and assign each new user to it.
- There’s a limit to the number of groups you can have, and a limit to how many groups a user can be in.
Based on the above paragraph, the correct answer is: A group can contain many users, and a user can belong to multiple groups.
The option that says: Groups can be nested is incorrect since this is not allowed in IAM Groups.
The option that says: There’s no limit to the number of groups you can have is incorrect because there is actually a certain limit to the number of groups you can have as well as a limit to how many groups a user can be in.
The option that says: There is a default group that automatically includes all users in the AWS account is incorrect because no such default group exists in IAM.
Which is a machine learning-powered security service that discovers, classifies, and protects sensitive data such as personally identifiable information (PII) or intellectual property? A.Amazon Rekognition B.Amazon Cognito C.Amazon GuardDuty D.Amazon Macie
D.Amazon Macie
Explanation:
Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property, and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved. The fully managed service continuously monitors data access activity for anomalies, and generates detailed alerts when it detects risk of unauthorized access or inadvertent data leaks.
You can use Amazon Macie to protect against security threats by continuously monitoring your data and account credentials. Amazon Macie gives you an automated and low touch way to discover and classify your business data and detect sensitive information such as personally identifiable information (PII) and credential data. When alerts are generated, you can use Amazon Macie for incident response, using Amazon CloudWatch Events to swiftly take action to protect your data.
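As a sketch of that incident-response wiring, the boto3 (Python) snippet below creates a CloudWatch Events rule that forwards Macie findings to an SNS topic; the rule name, topic ARN, and event pattern are illustrative assumptions, not a definitive configuration:

    import boto3

    events = boto3.client("events", region_name="us-east-1")

    # Match findings that Macie publishes to CloudWatch Events (EventBridge).
    events.put_rule(
        Name="macie-findings",  # hypothetical rule name
        EventPattern='{"source": ["aws.macie"], "detail-type": ["Macie Finding"]}',
        State="ENABLED",
    )

    # Forward matched findings to a (hypothetical) SNS topic for the security team.
    events.put_targets(
        Rule="macie-findings",
        Targets=[{
            "Id": "notify-security",
            "Arn": "arn:aws:sns:us-east-1:111122223333:security-alerts",  # placeholder
        }],
    )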
Hence, the correct answer is Amazon Macie.
Amazon Rekognition is incorrect because although it is also a machine learning-based service like Amazon Macie, it is primarily used for image and video analysis. You can’t use this to protect your sensitive data in AWS.
Amazon GuardDuty is incorrect because this is just a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads.
Amazon Cognito is incorrect because this is primarily used if you want to add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily.