Practice Test 6: AWS Certified Cloud Practitioner Practice Exam (6) Flashcards

1
Q

You have developed a web application targeting a global audience. Which of the following will help you achieve the highest redundancy and fault tolerance from an infrastructure perspective?

  • Deploy the application in multiple Availability Zones in a single AWS Region
  • Deploy the application in a single Availability Zone
  • There is no need to architect for these capabilities in AWS, as AWS is redundant by default
  • Deploy the application in multiple Availability Zones in multiple AWS Regions
A

Deploy the application in multiple Availability Zones in multiple AWS Regions

Explanation
Since you are targeting a global audience, you should leverage multiple AWS Regions to serve content to your users. The deployment option that gives you the highest redundancy is to deploy the application in multiple Availability Zones across multiple AWS Regions. This redundancy also increases the fault tolerance of the application, because if there is an outage in a single Availability Zone, the other Availability Zones can handle requests.

Additional information:

   It is important to understand that the AWS Cloud infrastructure is built around Regions and Availability Zones (AZs). A Region is a geographical location that contains multiple Availability Zones. Each AWS Region is designed to be completely isolated from the other AWS Regions. This achieves the greatest possible fault tolerance and stability.

   An Availability Zone consists of one or more data centers that are completely isolated from the other Availability Zones. Each AWS Region has at least two Availability Zones; most have three or more. Each Availability Zone is engineered to be independent from failures in other Availability Zones. Deploying your resources across multiple Availability Zones offers you the ability to operate production applications and databases that are more resilient, highly available, and scalable than would be possible from a single data center.

References:

https://d1.awsstatic.com/whitepapers/aws-overview.pdf

https://aws.amazon.com/about-aws/global-infrastructure/regions_az/

2
Q

Which of the following services is used when encrypting EBS volumes?

  • AWS KMS
  • AWS WAF
  • Amazon Macie
  • Amazon GuardDuty
A

AWS KMS

Explanation
Amazon EBS encryption offers a straightforward encryption solution for your EBS volumes that does not require you to build, maintain, and secure your own key management infrastructure. You can configure Amazon EBS to use the AWS Key Management Service (AWS KMS) to create and control the encryption keys used to encrypt your data. AWS KMS is also integrated with other AWS services, including Amazon S3 and Amazon Redshift, to make it simple to encrypt and decrypt your data.
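The configuration described above can be sketched with boto3; a minimal example, where the helper name, the `gp3` volume type, and the key ID are illustrative assumptions (omitting `KmsKeyId` falls back to the default `aws/ebs` KMS key):

```python
# Sketch: building the parameters for a KMS-encrypted EBS volume.
# Helper name and example values are illustrative, not from the exam text.

def encrypted_volume_params(az, size_gib, kms_key_id=None):
    """Build create_volume kwargs; omit kms_key_id to use the default aws/ebs key."""
    params = {
        "AvailabilityZone": az,
        "Size": size_gib,
        "VolumeType": "gp3",
        "Encrypted": True,  # EBS encryption is backed by AWS KMS keys
    }
    if kms_key_id:
        params["KmsKeyId"] = kms_key_id  # a customer managed KMS key
    return params

params = encrypted_volume_params("us-east-1a", 100)
# With credentials configured, the actual call would be:
#   import boto3
#   boto3.client("ec2").create_volume(**params)
print(params["Encrypted"])  # True
```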

The other options are incorrect:

“Amazon GuardDuty” is incorrect. Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your AWS accounts and workloads. GuardDuty analyzes continuous streams of meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately.

“AWS WAF” is incorrect. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.

“Amazon Macie” is incorrect. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect sensitive data stored in Amazon S3. Macie automatically detects a large and growing list of sensitive data types, including personally identifiable information (PII) such as names, addresses, and credit card numbers. Macie automatically provides an inventory of Amazon S3 buckets including a list of unencrypted buckets, publicly accessible buckets, and buckets shared with other AWS accounts. Then, Macie applies machine learning and pattern matching techniques to the buckets you select to identify and alert you to sensitive data. Amazon Macie can also be used in combination with other AWS services, such as AWS Step Functions to take automated remediation actions. This can help you meet regulations, such as the General Data Privacy Regulation (GDPR).

References:

https://aws.amazon.com/kms/

https://aws.amazon.com/ebs/faqs/

3
Q

Which of the following services is an AWS repository management system that allows for storing, versioning, and managing your application code?

  • AWS CodePipeline
  • AWS X-Ray
  • Amazon CodeGuru
  • AWS CodeCommit
A

AWS CodeCommit

Explanation
AWS CodeCommit is designed for software developers who need a secure, reliable, and scalable source control system to store and version their code. In addition, AWS CodeCommit can be used by anyone looking for an easy to use, fully managed data store that is version controlled. For example, IT administrators can use AWS CodeCommit to store their scripts and configurations. Web designers can use AWS CodeCommit to store HTML pages and images.

        AWS CodeCommit makes it easy for companies to host secure and highly available private Git repositories. Customers can use AWS CodeCommit to securely store anything from source code to binaries.

The other options are incorrect:

AWS CodePipeline is incorrect. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

AWS X-Ray is incorrect. AWS X-Ray is a service that collects data about requests that your application serves, and provides tools you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization.

Amazon CodeGuru is incorrect. Amazon CodeGuru is a developer tool that provides intelligent recommendations to improve code quality and identify an application’s most expensive lines of code.

References:

https://d1.awsstatic.com/whitepapers/aws-overview.pdf

4
Q

What factors determine how you are charged when using AWS Lambda? (Choose TWO)

  • Compute time consumed
  • Number of volumes
  • Placement Groups
  • Storage consumed
  • Number of requests to your functions
A

Compute time consumed
Number of requests to your functions

Explanation
With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the time it takes for your code to execute.
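The two billing dimensions can be sketched as a small calculation. The per-request and per-GB-second rates below are example figures only, not current AWS prices, and the function name is illustrative:

```python
# Sketch of the two AWS Lambda billing dimensions: requests and compute time.
# The prices below are assumed example rates; check current AWS pricing.

PRICE_PER_MILLION_REQUESTS = 0.20   # assumed example rate (USD)
PRICE_PER_GB_SECOND = 0.0000166667  # assumed example rate (USD)

def lambda_cost(requests, avg_duration_ms, memory_mb):
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    # Compute time is billed in GB-seconds: duration scaled by allocated memory
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return {
        "request_cost": request_cost,
        "gb_seconds": gb_seconds,
        "compute_cost": compute_cost,
        "total": request_cost + compute_cost,
    }

# 1M invocations, 100 ms each, 512 MB memory -> 50,000 GB-seconds of compute
print(lambda_cost(1_000_000, 100, 512)["gb_seconds"])  # 50000.0
```

Note that storage and volumes play no part in the formula, which is why those options are distractors.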

The other options are incorrect:

“Placement Groups” is incorrect. Placement Groups are logical groupings or clusters of EC2 instances within a single Availability Zone.

“Storage consumed” and “Number of volumes” are incorrect. Lambda is not a storage service. It is a compute service to run your applications.

References:

https://docs.aws.amazon.com/whitepapers/latest/how-aws-pricing-works/how-aws-pricing-works.pdf

5
Q

According to the AWS shared responsibility model, what are the controls that customers fully inherit from AWS? (Choose TWO)

  • Awareness and Training
  • Resource Configuration Management
  • Data Center Security Controls
  • Environmental Controls
  • Communication Controls
A

Data Center Security Controls
Environmental Controls

Explanation
AWS is responsible for physical controls and environmental controls. Customers inherit these controls from AWS.

    As mentioned on the AWS Shared Responsibility Model page, inherited controls are controls that a customer fully inherits from AWS, such as physical controls and environmental controls.

    As a customer deploying an application on AWS infrastructure, you inherit security controls pertaining to the AWS physical, environmental and media protection, and no longer need to provide a detailed description of how you comply with these control families.

    For example: You have built an application in AWS for customers to securely store their data, but your customers are concerned about the security of the data and ensuring compliance requirements are met. To address this, you assure your customer that “our company does not host customer data in its corporate or remote offices, but rather in AWS data centers that have been certified to meet industry security standards.” That includes physical and environmental controls to secure the data, which is the responsibility of Amazon. Customers of AWS do not have physical access to the AWS data centers, and as such, they fully inherit the physical and environmental security controls from AWS.

You can read more about AWS’ data center controls here:

https://aws.amazon.com/compliance/data-center/controls/

The other options are incorrect:

“Communication Controls” is incorrect. Communication controls are the responsibility of the customer.

“Awareness and Training” is incorrect. Awareness and Training belongs to the AWS Shared Controls. AWS trains AWS employees, but a customer must train their own employees.

“Resource Configuration Management” is incorrect. Configuration management belongs to the AWS Shared Controls. AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.

References:

https://aws.amazon.com/compliance/shared-responsibility-model/

6
Q

What are the benefits of the AWS Marketplace service? (Choose TWO)

  • Protects customers by performing security checks on listed products
  • Provides cheaper options for purchasing Amazon EC2 on-demand instances
  • Provides flexible pricing options that suit most customer needs
  • Per-second billing
  • Provides software solutions that run on AWS or any other Cloud vendor
A

Protects customers by performing security checks on listed products
Provides flexible pricing options that suit most customer needs

Explanation
The AWS Marketplace is a curated digital catalog that makes it easy for customers to find, buy, and immediately start using the software and services that customers need to build solutions and run their businesses. The AWS Marketplace includes thousands of software listings from popular categories such as security, networking, storage, machine learning, business intelligence, database, and DevOps. AWS Marketplace is designed for Independent Software Vendors (ISVs), Value-Added Resellers (VARs), and Systems Integrators (SIs) who have software products they want to offer to customers in the cloud. Partners use AWS Marketplace to be up and running in days and offer their software products to customers around the world.

The AWS Marketplace provides value to buyers in several ways:

1- It simplifies software licensing and procurement with flexible pricing options and multiple deployment methods. Flexible pricing options include free trial, hourly, monthly, annual, multi-year, and BYOL.

2- Customers can quickly launch pre-configured software with just a few clicks, and choose software solutions in AMI and SaaS formats, as well as other formats.

3- It ensures that products are scanned periodically for known vulnerabilities, malware, default passwords, and other security-related concerns.

The other options are incorrect:

“Provides cheaper options for purchasing Amazon EC2 on-demand instances” is incorrect. The AWS marketplace cannot be used to buy Amazon EC2 on-demand instances.

“Provides software solutions that run on AWS or any other Cloud vendor” is incorrect. The AWS Marketplace provides software solutions that run on AWS only.

“Per-second billing” is incorrect. The AWS marketplace pricing options include free trial, hourly, monthly, annual, multi-year, and BYOL. Per-second billing is found on AWS resources and services only. It is not found in the marketplace.

References:

https://aws.amazon.com/marketplace

https://docs.aws.amazon.com/marketplace/latest/userguide/what-is-marketplace.html

7
Q

What should you consider when storing data in Amazon Glacier?

  • Pick the right Glacier class based on your retrieval needs
  • Amazon Glacier only accepts data in a compressed format
  • Attach Glacier to an EC2 instance to be able to store data
  • Glacier can only be used to store frequently accessed data and data archives
A

Pick the right Glacier class based on your retrieval needs

Explanation
AWS customers use Amazon Glacier to backup large amounts of data at very low costs. There are three different storage classes for Amazon Glacier: Amazon S3 Glacier Instant Retrieval, Amazon S3 Glacier Flexible Retrieval, and Amazon S3 Glacier Deep Archive.

Choosing between S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, or S3 Glacier Deep Archive depends on how quickly you must retrieve your data. S3 Glacier Instant Retrieval delivers the fastest access to archive storage, with the same throughput and milliseconds access as the S3 Standard and S3 Standard-IA storage classes. With S3 Glacier Flexible Retrieval, you can retrieve your data within a few minutes to several hours (1-5 minutes to 12 hours), whereas with S3 Glacier Deep Archive, the minimum retrieval period is 12 hours.

For archive data that needs immediate access, such as medical images, news media assets, or genomics data, choose the S3 Glacier Instant Retrieval storage class. For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, choose S3 Glacier Flexible Retrieval (formerly S3 Glacier), with retrieval in minutes or free bulk retrievals in 5 - 12 hours. To save even more on long-lived archive storage such as compliance archives and digital media preservation, choose S3 Glacier Deep Archive, the lowest cost storage in the cloud with data retrieval from 12 - 48 hours.
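The selection guidance above can be sketched as a small helper. The thresholds mirror the text, the returned strings are the `StorageClass` values the Amazon S3 API uses for these classes, and the function name is an illustrative assumption:

```python
# Sketch: choosing an S3 Glacier storage class from the maximum acceptable
# retrieval delay, following the retrieval windows described in the text.

def pick_glacier_class(max_retrieval_hours):
    if max_retrieval_hours < 1 / 3600:  # effectively immediate (millisecond) access
        return "GLACIER_IR"             # S3 Glacier Instant Retrieval
    if max_retrieval_hours <= 12:
        return "GLACIER"                # S3 Glacier Flexible Retrieval
    return "DEEP_ARCHIVE"               # S3 Glacier Deep Archive

print(pick_glacier_class(0))   # GLACIER_IR
print(pick_glacier_class(5))   # GLACIER
print(pick_glacier_class(48))  # DEEP_ARCHIVE
```

The returned value could then be passed as the `StorageClass` argument of an S3 `put_object` call.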

The other options are incorrect:

“Amazon Glacier only accepts data in a compressed format” is incorrect. You can store virtually any kind of data in any format. But your costs will be lower if you aggregate and compress your data.

“Attach Glacier to an EC2 Instance to be able to store data” is incorrect. Glacier cannot be attached to EC2 instances. Glacier is a storage class of S3.

The storage service that AWS customers can use to attach storage volumes to an Amazon EC2 instance is Amazon EBS. An Amazon EBS volume is a durable, block-level storage device that you can attach to your EC2 instances. After you attach a volume to an instance, you can use it as you would use a physical hard drive. AWS recommends Amazon EBS for data that must be quickly accessible and requires long-term persistence. EBS volumes are particularly well-suited for use as the primary storage for operating systems, databases, or for any applications that require fine granular updates and access to raw, unformatted, block-level storage.

“Glacier can only be used to store frequently accessed data and data archives” is incorrect. Glacier is not for frequently accessed data.

References:

https://aws.amazon.com/s3/storage-classes/

8
Q

A financial services company decides to migrate one of its applications to AWS. The application deals with sensitive data, such as credit card information, and must run on a PCI-compliant environment. Which of the following is the company’s responsibility when building a PCI-compliant environment in AWS? (Choose TWO)

  • Ensure that all PCI DSS physical security requirements are met
  • Configure the underlying infrastructure of AWS services to meet all PCI DSS requirements
  • Start the migration process immediately as all AWS services are PCI compliant
  • Restrict any access to cardholder data and create a policy that addresses information security for all personnel
  • Ensure that AWS services are configured properly to meet all PCI DSS standards
A

Restrict any access to cardholder data and create a policy that addresses information security for all personnel
Ensure that AWS services are configured properly to meet all PCI DSS standards

Explanation
The Payment Card Industry Data Security Standard (PCI DSS) helps ensure that companies maintain a secure environment for storing, processing, and transmitting credit card information or sensitive authentication data (SAD). AWS customers who use AWS services to store, process, or transmit cardholder data can rely on AWS infrastructure as they manage their own PCI DSS compliance certification.

 Security and compliance are important shared responsibilities between AWS and the customer. It is the customer’s responsibility to maintain their PCI DSS cardholder data environment (CDE) and scope, and be able to demonstrate compliance of all PCI controls, but customers are not alone in this journey. The use of PCI DSS compliant AWS services can facilitate customer compliance, and the AWS Security Assurance Services team can assist customers with additional information specific to demonstrating the PCI DSS compliance of their AWS workloads.

 AWS Services listed as PCI DSS compliant means that they can be configured by customers to meet their PCI DSS requirements. It does not mean that any use of that service is automatically compliant. A good rule-of-thumb is that if a customer can set a particular configuration, they are responsible for setting it appropriately to meet PCI DSS requirements. AWS customers are also responsible for creating a policy that addresses information security for all personnel, and implementing strong access controls to restrict any access to cardholder data.

The other options are incorrect:

“Ensure that all PCI DSS physical security requirements are met” is incorrect. AWS is responsible for the security and compliance of its physical infrastructure, including the PCI DSS requirements.

“Start the migration process immediately as all AWS services are PCI compliant” is incorrect. Only certain AWS services are in-scope for PCI compliance. You can find a full list of in-scope services here. https://aws.amazon.com/compliance/services-in-scope/

“Configure the underlying infrastructure of AWS services to meet all applicable requirements of PCI DSS” is incorrect. Configuring the underlying infrastructure of AWS services is the responsibility of AWS, not the customer. If a customer is using one of the services that are in-scope for PCI DSS, the entire infrastructure that supports these services is compliant.

References:

https://d1.awsstatic.com/whitepapers/compliance/pci-dss-compliance-on-aws.pdf

https://aws.amazon.com/compliance/shared-responsibility-model/

9
Q

Which statement is true in relation to security in AWS?

  • Server-side encryption is the responsibility of AWS
  • AWS customers are responsible for patching any database software running on Amazon EC2
  • For serverless data stores such as Amazon S3, the customer is responsible for patching the operating system
  • AWS is responsible for the security of your application
A

AWS customers are responsible for patching any database software running on Amazon EC2

Explanation
AWS customers have two options to host their databases on AWS:

1- Using a managed database:

AWS Customers can use managed databases such as Amazon RDS to host their databases. In this case, AWS is responsible for performing all database management tasks such as hardware provisioning, patching, setup, configuration, backups, or recovery.

2- Installing a database software on Amazon EC2:

Instead of using a managed database, AWS customers can install any database software they want on Amazon EC2 and host their databases. In this case, Customers are responsible for performing all of the necessary configuration and management tasks.

Note: For Amazon RDS, all security patches and updates are applied automatically to the database software once they are released. But for databases installed on Amazon EC2, customers are required to apply the security patches and the updates manually or use the AWS Systems Manager service to apply them on a scheduled basis (every week, for example).

The other options are incorrect:

“For serverless data stores such as Amazon S3, the customer is responsible for patching the operating system” is incorrect. Amazon S3 is a serverless data store service that stores customer data without requiring management of underlying storage infrastructure. Amazon S3 enables customers to offload the administrative burdens of operating and scaling storage to AWS so that they do not have to worry about hardware provisioning, operating system patching, or maintenance of the platform.

AWS is responsible for most of the configuration and management tasks, but customers are still responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

Note:

A serverless service is a service that does not require the customer to manage the infrastructure layer, the operating system layer, or the platform layer. A serverless service can be a compute service such as AWS Lambda, an integration service such as Amazon SQS, or a data store service such as Amazon S3.

Read more about serverless services on AWS here:

https://aws.amazon.com/serverless/

“AWS is responsible for the security of your application” is incorrect. It is the responsibility of the customer to build secure applications.

“Server-side encryption is the responsibility of AWS” is incorrect. It is the responsibility of the customer to encrypt data either on the client side or on the server side.

References:

https://aws.amazon.com/compliance/shared-responsibility-model/

10
Q

How can you protect data stored on Amazon S3 from accidental deletion?

  • By enabling S3 Versioning
  • By disabling S3 Cross-Region Replication (CRR)
  • By configuring S3 Lifecycle Policies
  • By configuring S3 Bucket Policies
A

By enabling S3 Versioning

Explanation
Versioning in Amazon S3 is a means of keeping multiple variants of an object in the same bucket. You can use the S3 Versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets. With versioning, you can recover more easily from both unintended user actions and application failures.

 Versioning-enabled buckets can help you recover objects from accidental deletion or overwrite. For example, if you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version. Also, If you overwrite an object, it results in a new object version in the bucket. You can always restore the previous version.
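The delete-marker behaviour can be sketched with a toy in-memory model (this is not the S3 API; the class and method names are illustrative). In real use, versioning is switched on with the boto3 call `put_bucket_versioning`, passing a `VersioningConfiguration` of `{"Status": "Enabled"}`:

```python
# Toy model of S3 Versioning's delete-marker behaviour, not the real S3 API.
# Deleting adds a marker instead of removing data, so earlier versions survive.

class VersionedBucket:
    def __init__(self):
        self.versions = {}  # key -> list of versions, newest last

    def put(self, key, body):
        self.versions.setdefault(key, []).append(body)

    def delete(self, key):
        # S3 inserts a delete marker, which becomes the current version
        self.versions.setdefault(key, []).append("DELETE_MARKER")

    def get(self, key):
        history = self.versions.get(key, [])
        if not history or history[-1] == "DELETE_MARKER":
            return None  # object appears deleted, but old versions remain
        return history[-1]

    def restore(self, key):
        # Removing the delete marker makes the previous version current again
        if self.versions.get(key) and self.versions[key][-1] == "DELETE_MARKER":
            self.versions[key].pop()

b = VersionedBucket()
b.put("report.csv", "v1")
b.delete("report.csv")
print(b.get("report.csv"))  # None
b.restore("report.csv")
print(b.get("report.csv"))  # v1
```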

The other options are incorrect:

“By disabling S3 Cross-Region Replication (CRR)” is incorrect. S3 Cross-Region Replication (CRR) is an Amazon S3 feature that enables customers to replicate data across different AWS Regions to minimize latency for global users and/or meet compliance requirements. Disabling S3 Cross-Region Replication (CRR) does not help protect data from accidental deletion.

“By configuring S3 lifecycle policies” is incorrect. With S3 Lifecycle configuration rules, you can tell Amazon S3 to transition objects to less expensive storage classes, or archive or delete them. In order to reduce your Amazon S3 costs, you should create a lifecycle policy to automatically move old (or infrequently accessed) files to less expensive storage tiers, or to automatically delete them after a specified duration. The S3 Lifecycle feature is not meant to protect from accidental deletion of data.

“By configuring S3 Bucket Policies” is incorrect. A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. A Bucket Policy defines who can access a bucket, but does not help if an authorized user accidentally deleted objects in that bucket.

References:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html

11
Q

Which feature enables users to sign into their AWS accounts with their existing corporate credentials?

  • IAM Permissions
  • Federation
  • WAF Rules
  • Access Keys
A

Federation

Explanation
With Federation, you can use single sign-on (SSO) to access your AWS accounts using credentials from your corporate directory. Federation uses open standards, such as Security Assertion Markup Language 2.0 (SAML), to exchange identity and security information between an identity provider (IdP) and an application.

AWS offers multiple options for federating your identities in AWS:

1- AWS Identity and Access Management (IAM): You can use AWS Identity and Access Management (IAM) to enable users to sign in to their AWS accounts with their existing corporate credentials.

2- AWS IAM Identity Center (Successor to AWS Single Sign-On): AWS IAM Identity Center makes it easy to centrally manage federated access to multiple AWS accounts and business applications and provide users with single sign-on access to all their assigned accounts and applications from one place.

3- AWS Directory Service: AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, uses secure Windows trusts to enable users to sign in to the AWS Management Console, AWS Command Line Interface (CLI), and Windows applications running on AWS using their existing corporate Microsoft Active Directory credentials.

The other options are incorrect:

“WAF rules” is incorrect. AWS WAF is a web application firewall that helps protect web applications from attacks by allowing you to configure rules that block malicious traffic.

You use WAF rules in a web ACL to block web requests based on criteria like the following:

  • Scripts that are likely to be malicious. Attackers embed scripts that can exploit vulnerabilities in web applications. This is known as cross-site scripting (XSS).
  • Malicious requests from a set of IP addresses or address ranges.
  • SQL code that is likely to be malicious. Attackers try to extract data from your database by embedding malicious SQL code in a web request. This is known as SQL injection.

“IAM Permissions” is incorrect. IAM Permissions let you specify the desired access to AWS resources. Permissions are granted to IAM entities (users, user groups, and roles) and by default these entities start with no permissions. In other words, IAM entities can do nothing in AWS until you grant them your desired permissions.

“Access keys” is incorrect. Access keys are long-term credentials for an AWS IAM user or the AWS account root user. Access keys are not used for signing in to your account. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK).

References:

https://aws.amazon.com/identity/federation/

12
Q

Which of the following AWS Support Plans gives you 24/7 access to Cloud Support Engineers via email & phone? (Choose TWO)

  • Enterprise
  • Business
  • Standard
  • Developer
  • Premium
A

Enterprise
Business

Explanation
For Technical Support, each of the Business, Enterprise On-Ramp, and Enterprise support plans provides 24x7 phone, email, and chat access to Support Engineers.

The other options are incorrect:

“Premium” and “Standard” are incorrect. Premium and Standard are not valid support plans on AWS.

“Developer” is incorrect. This plan does not include phone support 24/7.

References:

https://aws.amazon.com/premiumsupport/compare-plans/

13
Q

What is the maximum amount of data that can be stored in S3 in a single AWS account?

  • 10 PetaBytes
  • 5 TeraBytes
  • 100 PetaBytes
  • Virtually Unlimited Storage
A

Virtually Unlimited Storage

Explanation
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes.
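As a quick sanity check, these limits can be expressed in code; treating 5 terabytes as 5 × 2⁴⁰ bytes is an assumption here, and the constant and function names are illustrative:

```python
# Sketch of the S3 limits stated above: unlimited total storage,
# but a single object must be between 0 bytes and 5 TB.

MAX_S3_OBJECT_BYTES = 5 * 1024**4  # 5 TB, assuming binary terabytes here

def fits_in_single_object(size_bytes):
    return 0 <= size_bytes <= MAX_S3_OBJECT_BYTES

print(fits_in_single_object(0))            # True  (empty objects are allowed)
print(fits_in_single_object(5 * 1024**4))  # True
print(fits_in_single_object(6 * 1024**4))  # False (must be split across objects)
```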

14
Q

Which of the following security resources are available to any user for free? (Choose TWO)

  • AWS Bulletin
  • AWS Support API
  • AWS TAM
  • AWS Classroom Training
  • AWS Security Blog
A

AWS Bulletin
AWS Security Blog

Explanation
The AWS free security resources include the AWS Security Blog, Whitepapers, AWS Developer Forums, Articles and Tutorials, Training, Security Bulletins, Compliance Resources and Testimonials.

The other options are incorrect.

“AWS Classroom Training” is incorrect. AWS provides live classes (Classroom Training) with accredited AWS instructors who teach you in-demand cloud skills and best practices using a mix of presentations, discussion, and hands-on labs. AWS Classroom Training is not free.

“AWS Support API” is incorrect. AWS Support API is available for AWS customers who have a Business, Enterprise On-Ramp, or Enterprise support plan. The AWS Support API provides programmatic access to AWS Support Center features to create, manage, and close support cases.

“AWS TAM” is incorrect. A Technical Account Manager (TAM) is your designated technical point of contact who provides advocacy and guidance to help plan and build solutions using best practices and proactively keep your AWS environment operationally healthy and secure. TAM is available only for AWS customers who have an Enterprise On-Ramp or Enterprise support plan.

15
Q

Which of the following actions may reduce Amazon EBS costs? (Choose TWO)

  • Deleting unused Bucket ACLs
  • Using reservations
  • Deleting unnecessary snapshots
  • Distributing requests to multiple volumes
  • Changing the type of the volume
A

Deleting unnecessary snapshots
Changing the type of the volume

Explanation
With Amazon EBS, it is important to keep in mind that you are paying for provisioned capacity and performance, even if the volume is unattached or has very low write activity. To optimize storage performance and costs for Amazon EBS, monitor volumes periodically to identify unattached, underutilized or overutilized volumes, and adjust provisioning to match actual usage.

When you want to reduce the costs of Amazon EBS consider the following:

1- Delete Unattached Amazon EBS Volumes:

An easy way to reduce wasted spend is to find and delete unattached volumes. When EC2 instances are stopped or terminated, attached EBS volumes are not necessarily deleted automatically, and they continue to accrue charges because you pay for provisioned capacity whether or not the volume is in use.

2- Resize or Change the EBS Volume Type:

Another way to optimize storage costs is to identify volumes that are underutilized and downsize them or change the volume type.

3- Delete Stale Amazon EBS Snapshots:

If you have a backup policy that takes EBS volume snapshots daily or weekly, you will quickly accumulate snapshots. Check for stale snapshots that are over 30 days old and delete them to reduce storage costs.
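The stale-snapshot check above can be sketched on plain dictionaries so it runs without AWS access; with boto3, the same records would come from `ec2.describe_snapshots(OwnerIds=["self"])`. The field names match the EC2 API, while the function name and the 30-day cutoff follow the text:

```python
# Sketch: identifying EBS snapshots older than 30 days, as suggested above.
from datetime import datetime, timedelta, timezone

def stale_snapshots(snapshots, max_age_days=30, now=None):
    """Return the IDs of snapshots whose StartTime is older than the cutoff."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
snaps = [
    {"SnapshotId": "snap-old", "StartTime": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"SnapshotId": "snap-new", "StartTime": datetime(2024, 5, 25, tzinfo=timezone.utc)},
]
print(stale_snapshots(snaps, now=now))  # ['snap-old']
# Each stale snapshot could then be removed with:
#   ec2.delete_snapshot(SnapshotId=snap_id)
```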

The other options are incorrect:

“Deleting unused Bucket ACLs” is incorrect. Amazon EBS does not use buckets. Buckets are used in S3 storage. Amazon S3 Bucket ACLs enable you to manage access to buckets. Each bucket has an ACL attached to it as a subresource. You can use Bucket ACLs to grant basic read/write permissions to other AWS accounts.

Note: You have three options to control access to an Amazon S3 Bucket:

1- IAM Policies

2- Bucket Policies

3- Bucket ACLs

“Distributing requests to multiple volumes” is incorrect. Distributing requests across multiple volumes may improve performance, but it does not reduce costs; you still pay for the total capacity you have provisioned.

“Using reservations” is incorrect. Amazon EBS does not offer a reservation pricing model; you pay for the storage capacity you provision.

16
Q

Which pillar of the AWS Well-Architected Framework provides recommendations to help customers select the right compute resources based on workload requirements?

  • Operational Excellence
  • Security
  • Reliability
  • Performance Efficiency
A

Performance Efficiency

Explanation
The AWS Well-Architected Framework describes the key concepts, design principles, and architectural best practices for designing and running workloads in the cloud.

The six Pillars of the AWS Well-Architected Framework: (IMPORTANT)

1- Operational Excellence

2- Security

3- Reliability

4- Performance Efficiency

5- Cost Optimization

6- Sustainability

The correct answer is: Performance Efficiency

   The performance efficiency pillar includes the ability to use computing resources efficiently to meet system requirements. Key topics include selecting the right resource types and sizes based on workload requirements, monitoring performance, and making informed decisions to maintain efficiency as business needs evolve.

The other options are incorrect:

“Reliability” is incorrect. The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. A resilient workload quickly recovers from failures to meet business and customer demand. Key topics include distributed system design, recovery planning, and how to handle change.

“Operational Excellence” is incorrect. The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. Key topics include automating changes, responding to events, and defining standards to manage daily operations.

“Security” is incorrect. The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. Key topics include confidentiality and integrity of data, identifying and managing who can do what with privilege management, protecting systems, and establishing controls to detect security events.

References: https://aws.amazon.com/architecture/well-architected/

17
Q

AWS provides disaster recovery capability by allowing customers to deploy infrastructure into multiple ___________ .

  • Transportation devices
  • Edge Locations
  • Support Plans
  • Regions
A

Regions

Explanation
Businesses are using the AWS cloud to enable faster disaster recovery of their critical IT systems without incurring the infrastructure expense of a second physical site. The AWS cloud supports many popular disaster recovery architectures from “pilot light” environments that may be suitable for small customer workload data center failures to “hot standby” environments that enable rapid failover at scale. With data centers in Regions all around the world, AWS provides a set of cloud-based disaster recovery services that enable rapid recovery of your IT infrastructure and data.

The other options are incorrect:

“Transportation devices” is incorrect. AWS offers data transport devices, such as AWS Snowball and AWS Snowmobile, to help companies transfer large amounts of data to the cloud; they are not a disaster recovery deployment target.

“Support plans” is incorrect. AWS provides multiple support plans to meet the different support requirements of its customers.

“Edge locations” is incorrect. AWS edge locations are used by the CloudFront service to cache and serve content to end-users from a nearby geographical location to reduce latency.

References:

https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-workloads-on-aws.pdf#disaster-recovery-options-in-the-cloud

18
Q

What is the main benefit of attaching security groups to an Amazon RDS instance?

  • Deploy SSL/TLS certificates for use with your database instance
  • Manages user access and encryption keys
  • Controls what IP address ranges can connect to your database instance
  • Distributes incoming traffic across multiple targets
A

Controls what IP address ranges can connect to your database instance

Explanation
In Amazon RDS, security groups are used to control which IP address ranges can connect to your databases on a DB instance. When you initially create a DB instance, its firewall prevents any database access except through rules specified by an associated security group.
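Conceptually, a security group ingress rule is a CIDR-range check against the client's source IP. A minimal local illustration of that check (the function name is hypothetical, and the addresses are documentation-reserved example ranges):

```python
import ipaddress

def rule_allows(cidr: str, client_ip: str) -> bool:
    """Conceptual check: an ingress rule admits a client only if its
    source IP falls inside the rule's CIDR range."""
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(cidr)

print(rule_allows("203.0.113.0/24", "203.0.113.42"))  # → True
print(rule_allows("203.0.113.0/24", "198.51.100.7"))  # → False
```

A real RDS security group rule also specifies protocol and port (e.g. TCP 3306 for MySQL), but the IP-range filtering idea is the same.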

References:

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.html

19
Q

Which database service should you use if your application and data schema require “joins” or complex transactions?

  • AWS Outposts
  • Amazon RDS
  • Amazon DynamoDB
  • Amazon DocumentDB
A

Amazon RDS

Explanation
If your database’s schema cannot be denormalized, and your application requires joins or complex transactions, consider using a relational database such as Amazon RDS.

The other options are incorrect:

“Amazon DynamoDB” is incorrect. A NoSQL database such as Amazon DynamoDB is a type of non-relational database that uses a simple key-value method to store and retrieve data. DynamoDB does not support complex relational queries such as joins or complex transactions.

“Amazon DocumentDB” is incorrect. Document databases such as Amazon DocumentDB are designed to store semi-structured data as documents. Document databases do not support complex relational queries such as joins or complex transactions.

“AWS Outposts” is incorrect. AWS Outposts is an AWS service that delivers the same AWS infrastructure, native AWS services, APIs, and tools to virtually any customer on-premises facility. With AWS Outposts, customers can run AWS services locally on their Outpost, including EC2, EBS, ECS, EKS, and RDS, and also have full access to services available in the Region. Customers can use AWS Outposts to securely store and process data that needs to remain on premises or in countries where there is no AWS region. AWS Outposts is ideal for applications that have low latency or local data processing requirements, such as financial services, healthcare, etc.

References:
https://aws.amazon.com/products/databases/
https://aws.amazon.com/rds/

20
Q

For some services, AWS automatically replicates data across multiple Availability Zones to provide fault tolerance in the event of a server failure or Availability Zone outage. Select TWO services that automatically replicate data across Availability Zones.

  • Amazon RDS for Oracle
  • Amazon Route 53
  • Amazon Aurora
  • Instance Store
  • S3
A

Amazon Aurora
S3

Explanation
For S3 Standard, S3 Standard-IA, and S3 Glacier storage classes, your objects are automatically stored across multiple devices spanning a minimum of three Availability Zones, each on different power grids within an AWS Region. This means your data is available when needed and protected against AZ failures.

  Amazon Aurora is an Amazon RDS database engine. All of your data in Amazon Aurora is automatically replicated across three Availability Zones within an AWS region, providing built-in high availability and data durability.

  Other Amazon RDS database engines (PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server) do not replicate data automatically. To protect from data loss when using any of these engines, you need to manually enable the Multi-AZ feature. In a Multi-AZ Deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. If you encounter problems with the primary copy, Amazon RDS automatically switches to the standby copy to provide continued availability to the data.

The other options are incorrect:

“Instance Store” is incorrect. An instance store provides temporary block-level storage for EC2 instances. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content.

“Amazon Route 53” is incorrect. Amazon Route 53 is not used for storing data. It is a globally available, cloud-based Domain Name System (DNS) web service not tied to Availability Zones.

“Amazon RDS for Oracle” is incorrect. Amazon RDS for Oracle does not automatically replicate data. Amazon RDS supports six database engines (Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server). Amazon Aurora is the only database engine that replicates data automatically across three Availability Zones. For other database engines, you must enable the “Multi-AZ” feature manually. In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a standby copy of your data in a different Availability Zone. If a storage volume on your primary instance fails, Amazon RDS automatically initiates a failover to the up-to-date standby.

References:

https://aws.amazon.com/rds/aurora/
https://aws.amazon.com/s3/faqs/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html

21
Q

Which of the following are part of the seven design principles for security in the cloud? (Choose TWO)

  • Enable real-time traceability
  • Never store sensitive data in the cloud
  • Use IAM roles to grant temporary access instead of long-term credentials
  • Scale horizontally to protect from failures
  • Use manual monitoring techniques to protect your AWS resources
A

Enable real-time traceability
Use IAM roles to grant temporary access instead of long-term credentials

Explanation
There are seven design principles for security in the cloud:

1- Implement a strong identity foundation: Implement the principle of least privilege and enforce separation of duties with appropriate authorization for each interaction with your AWS resources. Centralize privilege management and reduce or even eliminate reliance on long-term credentials.

2- Enable traceability: Monitor, alert, and audit actions and changes to your environment in real time. Integrate logs and metrics with systems to automatically respond and take action.

3- Apply security at all layers: Rather than just focusing on protection of a single outer layer, apply a defense-in-depth approach with other security controls. Apply to all layers (e.g., edge network, VPC, subnet, load balancer, every instance, operating system, and application).

4- Automate security best practices: Automated software-based security mechanisms improve your ability to securely scale more rapidly and cost effectively. Create secure architectures, including the implementation of controls that are defined and managed as code in version-controlled templates.

5- Protect data in transit and at rest: Classify your data into sensitivity levels and use mechanisms, such as encryption, tokenization, and access control where appropriate.

6- Keep people away from data: Create mechanisms and tools to reduce or eliminate the need for direct access or manual processing of data. This reduces the risk of loss or modification and human error when handling sensitive data.

7- Prepare for security events: Prepare for an incident by having an incident management process that aligns to your organizational requirements. Run incident response simulations and use tools with automation to increase your speed for detection, investigation, and recovery.

The other options are incorrect:

“Scale horizontally to protect from failures” is incorrect. Protecting from networking failures due to hardware issues or mis-configuration is not related to security. Protecting from failures and scaling horizontally are much more related to the reliability of your system.

“Never store sensitive data in the cloud” is incorrect. AWS provides encryption and access control tools that allow you to easily encrypt your data in transit and at rest and help ensure that only authorized users can access it.

“Use manual monitoring techniques to protect your AWS resources” is incorrect. Automating security tasks on AWS enables you to be more secure. For example, you can automate infrastructure and application security checks to continually enforce your security and compliance controls and help ensure confidentiality, integrity, and availability at all times.

References:

https://docs.aws.amazon.com/wellarchitected/latest/framework/wellarchitected-framework.pdf

22
Q

Each AWS Region is composed of multiple Availability Zones. Which of the following best describes what an Availability Zone is?

  • It is a distinct location within a region that is insulated from the failures in other Availability Zones
  • It is a collection of data centers distributed in multiple countries
  • It is a collection of Local Zones designed to be completely isolated from each other
  • It is a logically isolated network of the AWS Cloud
A

It is a distinct location within a region that is insulated from the failures in other Availability Zones

Explanation
Availability Zones are distinct locations within a region that are insulated from failures in other Availability Zones.

Note:

Although Availability Zones are insulated from failures in other Availability Zones, they are connected through private, low-latency links to other Availability Zones in the same region.

The other options are incorrect:

“It is a collection of data centers distributed in multiple countries” is incorrect. An Availability Zone is a collection of data centers located in one AWS Region.

“It is a logically isolated network of the AWS Cloud” is incorrect. This statement describes Amazon VPC.

“It is a collection of Local Zones designed to be completely isolated from each other” is incorrect. An Availability Zone consists of one or more discrete data centers located in one AWS Region.

A Local Zone is an extension of an AWS Region in geographic proximity to your users. With AWS Local Zones, you can easily run highly-demanding applications that require single-digit millisecond latencies to your end-users, such as real-time gaming, hybrid migrations, AR/VR, and machine learning. AWS Local Zones enable you to comply with state and local data residency requirements in sectors such as healthcare, financial services, iGaming, and government.

AWS Local Zones are connected to the parent region via Amazon’s redundant and very high bandwidth private network, giving applications running in AWS Local Zones fast, secure, and seamless access to the full range of in-region services through the same APIs and tool sets.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html

23
Q

Which of the following is a benefit of the “Loose Coupling” architecture principle?

  • It allows individual application components or services to be modified without affecting other components
  • It eliminates the need for change management
  • It allows for Cross-Region Replication
  • It helps AWS customers reduce Privileged Access to AWS resources
A

It allows individual application components or services to be modified without affecting other components

Explanation
As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies - a change or a failure in one component should not cascade to other components.

The AWS services that can help you build loosely-coupled applications include:

1- Amazon Simple Queue Service (Amazon SQS): Amazon SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.

2- Amazon EventBridge (also called Amazon CloudWatch Events): Amazon EventBridge is a serverless event bus service that makes it easy for you to build event-driven application architectures. Amazon EventBridge helps you accelerate modernizing and re-orchestrating your architecture with decoupled services and applications. With EventBridge, you can speed up your organization’s development process by allowing teams to iterate on features without explicit dependencies between systems.

3- Amazon SNS: Amazon SNS is a publish/subscribe messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Both Amazon SNS and Amazon EventBridge can be used to implement the publish-subscribe pattern. Amazon EventBridge includes direct integrations with software as a service (SaaS) applications and other AWS services. It’s ideal for publish-subscribe use cases involving these types of integrations.
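The decoupling idea behind these services can be sketched locally, with Python's standard queue module standing in for a message queue such as SQS (the message shape and function names are hypothetical). The producer and consumer share only the queue, never call each other, and can therefore be modified or restarted independently:

```python
import queue

# Local stand-in for a message queue: neither side knows about the other.
q = queue.Queue()

def producer(order_id):
    q.put({"order_id": order_id})   # fire-and-forget; no knowledge of the consumer

def consumer():
    msg = q.get()                   # pulls work whenever it is ready
    return f"processed order {msg['order_id']}"

producer(42)
print(consumer())  # → processed order 42
```

With SQS the queue is additionally durable and distributed, so a consumer outage does not lose messages; they simply wait in the queue.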

The other options are incorrect:

“It helps AWS customers reduce Privileged Access to AWS resources” is incorrect. This statement relates to the “Principle of Least Privilege”, not “Loose Coupling”. Loose Coupling does not deal with access privileges.

“It allows for Cross-Region Replication” is incorrect. There is no relation between Cross-Region Replication and Loose Coupling. Cross-Region Replication (CRR) is an Amazon S3 feature that enables customers to replicate data across different AWS Regions to minimize latency for global users and/or meet compliance requirements.

“It eliminates the need for change management” is incorrect. Loose Coupling does not eliminate the need for Change Management. Change Management is the process responsible for controlling the Lifecycle of all Changes made in an AWS account. The primary objective of Change Management is to enable beneficial changes to be made, with minimum disruption to IT Services. An erroneous configuration or misstep in a process can frequently lead to infrastructure or service disruptions. Creating and implementing a change management strategy will help reduce the risk of failure by monitoring all changes and rolling back failed changes.

Additional information:

AWS Config and AWS CloudTrail are change management tools that help AWS customers audit and monitor all resource and configuration changes in their AWS environment. AWS Config provides information about the changes made to a resource, and AWS CloudTrail provides information about who made those changes. These capabilities enable customers to discover any misconfigurations, fix them, and protect their workloads from failures.

24
Q

Which AWS service provides the EASIEST way to set up and manage a secure, well-architected, multi-account AWS environment?

  • Amazon Macie
  • AWS Control Tower
  • AWS Security Hub
  • AWS Systems Manager Patch Manager
A

AWS Control Tower

Explanation
You can use AWS Control Tower or AWS Organizations to set up and manage a secure, well-architected, multi-account AWS environment. With AWS Organizations, you build your environment from the ground up, which requires more upfront effort with full control over every aspect of your environment. AWS Control Tower provides built-in best-practice blueprints, guardrails, and automation features that help you build your multi-account environment quickly and easily.

If you’re a customer with multiple AWS accounts and teams, cloud setup and governance can be complex and time-consuming, slowing down the very innovation you’re trying to speed up. AWS Control Tower provides the easiest way to set up a secure, multi-account AWS environment. For ongoing governance, you can enable pre-configured guardrails, which are clearly defined rules for security, operations, and compliance. Guardrails help prevent deployment of resources that don’t conform to policies and continuously monitor deployed resources for nonconformance. The AWS Control Tower dashboard provides centralized visibility into the multi-account AWS environment, including accounts provisioned, guardrails enabled, and the compliance status of accounts.

Q: What is the difference between AWS Control Tower and AWS Organizations?

AWS Control Tower creates an abstraction or orchestration layer that combines and integrates the capabilities of several other AWS services, including AWS Organizations, AWS Single Sign-on, and AWS Service Catalog. AWS Control Tower offers an abstracted, automated, and prescriptive experience on top of AWS Organizations. It automatically sets up AWS Organizations as the underlying AWS service to organize accounts and implements preventive guardrails using service control policies (SCPs).

The other options are incorrect:

“AWS Security Hub” is incorrect. AWS Security Hub aggregates, organizes, and prioritizes security alerts and findings from multiple AWS security services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, and supported third-party partners to help you analyze your security trends and identify the highest priority security issues.

“AWS Systems Manager Patch Manager” is incorrect. AWS Systems Manager helps you select and deploy operating system and software patches automatically across large groups of Amazon EC2 or on-premises instances. Through patch baselines, you can set rules to auto-approve select categories of patches to be installed, such as operating system or high severity patches. Systems Manager helps ensure that your software is up-to-date and meets your compliance policies.

“Amazon Macie” is incorrect. Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property, and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved.

25
Q

Who is responsible for scaling a DynamoDB database in the AWS Shared Responsibility Model?

  • AWS
  • Your development team
  • Your internal DevOps team
  • Your security team
A

AWS

Explanation
DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS so that they do not have to worry about hardware provisioning, setup and configuration, throughput capacity planning, replication, software patching, or cluster scaling.

26
Q

Which of the following services enables you to easily generate and use your own encryption keys in the AWS Cloud?

  • AWS CloudHSM
  • AWS WAF
  • AWS Certificate Manager
  • AWS Shield
A

AWS CloudHSM

Explanation
AWS CloudHSM is a cloud-based Hardware Security Module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud.

The other options are incorrect:

“AWS Certificate Manager” is incorrect. AWS Certificate Manager is a service that lets you provision, manage, and deploy (SSL/TLS) certificates for use with AWS services and your internal connected resources.

“AWS Shield” is incorrect. AWS Shield is a managed Distributed Denial of Service (DDoS) protection service.

“AWS WAF” is incorrect. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.

27
Q

What does the AWS “Business” support plan provide? (Choose TWO)

  • Less than 15 minutes response-time support if your business-critical system goes down
  • Access to the full Trusted Advisor Checks
  • Proactive Technical Account Management
  • Consultative review and guidance based on your applications
  • AWS Health API
A

Access to the full Trusted Advisor Checks
AWS Health API

Explanation
AWS recommends Business Support if you have production workloads on AWS and want 24x7 access to technical support and architectural guidance in the context of your specific use-cases.

In addition to what is available with Basic Support, Business Support provides:

1- AWS Trusted Advisor - Access to the full set of Trusted Advisor checks and guidance to provision your resources following best practices to help reduce costs, increase performance and fault tolerance, and improve security.

2- AWS Health Dashboard - A personalized view of the health of AWS services, and alerts when your resources are impacted. Also includes the AWS Health API for integration with your existing management systems. AWS Health API is available only for AWS customers who have a Business, Enterprise On-Ramp, or Enterprise support plan.

3- Enhanced Technical Support – 24x7 access to Cloud Support Engineers via phone, chat, and email. You can have an unlimited number of contacts that can open an unlimited number of cases.

Response times are as follows:

  • General Guidance - < 24 hours
  • System Impaired - < 12 hours
  • Production System Impaired - < 4 hours
  • Production System Down - < 1 hour

4- Architecture Support – Contextual guidance on how services fit together to meet your specific use-case, workload, or application.

5- AWS Support API - Programmatic access to AWS Support Center features to create, manage, and close your support cases, and operationally manage your Trusted Advisor check requests and status.

6- Access to Proactive Support Programs – Ability to purchase Infrastructure Event Management for an additional fee. This provides Architecture and scaling guidance, and real-time operational support during the preparation and execution of planned events, product launches, and migrations.

The other options are incorrect:

“Consultative review and guidance based on your applications” is incorrect. AWS support plans differ on what level of architectural support each of them provides.

  • The AWS Enterprise On-Ramp and Enterprise support plans provide consultative review and guidance based on your applications.
  • The AWS Business Support provides contextual architectural guidance on what AWS products, features, and services to use to best support your specific use-case, workload, or application.
  • The AWS Developer Support provides general architectural guidance on how to use AWS products, features, and services together to best support your specific use-case, workload, or application.

“Less than 15 minutes response-time support if your business critical system goes down” is incorrect. The AWS Business support plan provides a 1-hour response time if your production system goes down. If you need a response time of less than 15 minutes, you must subscribe to the AWS Enterprise or Enterprise On-Ramp support plan.

“Proactive Technical Account Management” is incorrect. Proactive Technical Account Management is only available for AWS customers who have an Enterprise On-Ramp or Enterprise support plan. A Technical Account Manager (TAM) is your designated technical point of contact who provides advocacy and guidance to help plan and build solutions using best practices, coordinate access to subject matter experts and product teams, and proactively keep your AWS environment operationally healthy.

28
Q

You have multiple standalone AWS accounts and you want to decrease your AWS monthly charges. What should you do?

  • Add the accounts to an AWS Organization and use Consolidated Billing
  • Track the AWS charges that are incurred by the member accounts
  • Try to remove the unnecessary AWS accounts
  • Enable AWS tiered pricing before provisioning resources
A

Add the accounts to an AWS Organization and use Consolidated Billing

Explanation
Consolidated billing has the following benefits:

1- One bill – You get one bill for multiple accounts.

2- Easy tracking – You can track each account’s charges, and download the cost data in .csv format.

3- Combined usage – If you have multiple standalone accounts, your charges might decrease if you add the accounts to an organization. AWS combines usage from all accounts in the organization to qualify you for volume pricing discounts.

4- No extra fee – Consolidated billing is offered at no additional cost.

The other options are incorrect:

“Try to remove unnecessary AWS accounts” is incorrect. Removing accounts or resources depends on your needs.

“Track the AWS charges that are incurred by the member accounts” is incorrect. Tracking the AWS charges will not decrease your charges.

“Enable AWS tiered-pricing before provisioning resources” is incorrect. AWS tiered-pricing is applied for every AWS account regardless of whether it is part of an organization or not. With AWS, you can get volume-based discounts and realize important savings as your usage increases. For services such as S3 and data transfer OUT from EC2, pricing is tiered, meaning the more you use, the less you pay per GB. But if you have multiple AWS accounts, you can achieve even more discounts by adding them to an Organization and enable consolidated billing (because in that case, AWS will treat all the accounts as one account).
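A worked example of why combined usage can lower the bill under tiered pricing (the tier sizes and per-GB rates below are invented for illustration and are not real AWS prices):

```python
def tiered_cost_cents(gb, tiers):
    """Cost in cents under tiered pricing; each tier is (gb_in_tier, cents_per_gb)."""
    cost, remaining = 0, gb
    for size, rate in tiers:
        used = min(remaining, size)
        cost += used * rate
        remaining -= used
        if remaining == 0:
            break
    return cost

# Hypothetical tiers: first 50 GB at 10 cents/GB, everything beyond at 5 cents/GB.
tiers = [(50, 10), (float("inf"), 5)]

separate = tiered_cost_cents(40, tiers) + tiered_cost_cents(40, tiers)  # two standalone accounts
combined = tiered_cost_cents(80, tiers)                                 # one consolidated bill
print(separate, combined)  # → 800 650
```

Separately, each account's 40 GB stays entirely in the expensive first tier; consolidated, the combined 80 GB crosses into the cheaper tier, so the organization pays less for the same total usage.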

29
Q

You are working as a site reliability engineer (SRE) in an AWS environment. Which of the following services helps you monitor your applications?

  • Amazon CloudWatch
  • AWS CloudHSM
  • Amazon Elastic MapReduce
  • Amazon CloudSearch
A

Amazon CloudWatch

Explanation
Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications running on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.
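Conceptually, a CloudWatch alarm fires when a configured number of consecutive metric datapoints breach a threshold. A minimal local sketch of that evaluation logic (the function name and sample values are hypothetical):

```python
def alarm_state(datapoints, threshold, periods):
    """Mimic a CloudWatch-style alarm: ALARM when the last `periods`
    datapoints are all above `threshold`, otherwise OK."""
    recent = datapoints[-periods:]
    if len(recent) == periods and all(d > threshold for d in recent):
        return "ALARM"
    return "OK"

cpu = [41.0, 55.2, 78.9, 92.5, 95.1]   # e.g. CPUUtilization samples (percent)
print(alarm_state(cpu, threshold=90.0, periods=2))  # → ALARM
print(alarm_state(cpu, threshold=90.0, periods=3))  # → OK
```

In CloudWatch itself, the alarm's transition to the ALARM state can then trigger an automatic action, such as an SNS notification or an Auto Scaling policy.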

The other options are incorrect:

“Amazon Elastic MapReduce” is incorrect. Amazon Elastic MapReduce (Amazon EMR) provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances.

“Amazon CloudSearch” is incorrect. Amazon CloudSearch is used to set up, manage, and scale a search solution for your website or application.

“AWS CloudHSM” is incorrect. AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud.

References:
https://aws.amazon.com/cloudwatch/

30
Q

Which of the following procedures can reduce latency when your end users are retrieving data? (Choose TWO)

  • Replicate the media assets to at least 2 Availability Zones
  • Store media assets in the region closest to your end users
  • Store media assets in S3 and use CloudFront to distribute these assets
  • Store media assets on an additional EBS volume and increase the capacity of your server
  • Reduce the size of media assets using the Amazon Elastic Transcoder
A

Store media assets in S3 and use CloudFront to distribute these assets
Store media assets in the region closest to your end users

Explanation
Amazon CloudFront is a fast Content Delivery Network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds.

CloudFront is the best solution to reduce latency if you have users from different places around the world.

         Storing media assets in a region closer to the end-users can help reduce latency for those users. This is because these assets will travel a shorter distance over the network.

The other options are incorrect:

“Store media assets on an additional EBS volume and increase the capacity of your server” is incorrect. Storing media assets on an additional EBS volume or increasing the capacity of your server does nothing with regards to latency. The question does not mention that you are facing heavy workloads, so increasing the capacity of your EC2 instances to more powerful types will be a waste of money in this scenario.

“Replicate media assets to at least two availability zones” is incorrect. Replicating your media assets on at least two availability zones may improve the availability of your application but will not reduce latency especially if these AZs exist in the same region.

“Reduce the size of media assets using the Amazon Elastic Transcoder” is incorrect. Amazon Elastic Transcoder lets you convert (or “transcode”) media files from their source format into versions that will playback on mobile devices, tablets, web browsers, and connected televisions. Transcoding changes the playback format of a file; it is not a technique for reducing network latency for end users.

31
Q

How does AWS help customers achieve compliance in the cloud?

  • AWS applies the most common Cloud security standards, and is responsible for complying with customers’ applicable laws and regulations
  • Many AWS services are assessed regularly to comply with local laws and regulations
  • It’s not possible to meet regulatory compliance requirements in the Cloud
  • AWS has many common assurance certifications such as ISO 9001 and HIPAA
A

AWS has many common assurance certifications such as ISO 9001 and HIPAA

Explanation
AWS environments are continuously audited, and its infrastructure and services are approved to operate under several compliance standards and industry certifications across geographies and industries, including PCI DSS, ISO 27001, ISO 9001, and HIPAA. You can use these certifications to validate the implementation and effectiveness of AWS security controls. For example, companies that use AWS products and services to handle credit card information can rely on AWS technology infrastructure as they manage their own PCI DSS compliance certification.

The other options are incorrect:

“AWS applies the most common Cloud security standards, and is responsible for complying with customers’ applicable laws and regulations” is incorrect. In all cases, customers operating in the cloud remain responsible for complying with applicable laws and regulations.

“Many AWS services are assessed regularly to comply with local laws and regulations” is incorrect. AWS services are assessed regularly to comply with common compliance standards NOT with local laws and regulations.

“It’s not possible to meet regulatory compliance requirements in the Cloud” is incorrect. AWS environments are continuously audited, and its infrastructure and services are approved to operate under several compliance standards and industry certifications across geographies and industries. For example, AWS enables covered entities and their business associates subject to the U.S. Health Insurance Portability and Accountability Act of 1996 (HIPAA) to use the secure AWS environment to process, maintain, and store protected health information.

32
Q

Which of the following Cloud Computing deployment models eliminates the need to run and maintain physical data centers?

  • Cloud
  • On-Premises
  • PaaS
  • IaaS
A

Cloud

Explanation
There are three Cloud Computing Deployment Models:

1- Cloud:

    A cloud-based application is fully deployed in the cloud and all parts of the application run in the cloud. This Cloud Computing deployment model eliminates the need to run and maintain physical data centers.

2- Hybrid:

   A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud (On-premises data centers).

3- On-premises:

   Deploying resources on-premises, using virtualization and resource management tools, is sometimes called “private cloud”. On-premises deployment does not provide many of the benefits of cloud computing but is sometimes sought for its ability to provide dedicated resources.

The other options are incorrect:

IaaS, PaaS, and SaaS are not deployment models. They represent the different use cases of Cloud Computing, and the different levels of control customers need over their IT resources.

IaaS is incorrect. Infrastructure as a Service, sometimes abbreviated as IaaS, contains the basic building blocks for cloud IT and typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. Infrastructure as a Service provides you with the highest level of flexibility and management control over your IT resources and is most similar to existing IT resources that many IT departments and developers are familiar with today.

PaaS is incorrect. Platform as a Service (PaaS) removes the need for organizations to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. This helps you be more efficient as you don’t need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application.

SaaS - Software as a Service (SaaS) provides you with a completed product that is run and managed by the service provider. In most cases, people referring to Software as a Service are referring to end-user applications. With a SaaS offering you do not have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use that particular piece of software. A common example of a SaaS application is web-based email, where you can send and receive email without having to manage feature additions to the email product or maintain the servers and operating systems that the email program is running on.

References:

https://aws.amazon.com/types-of-cloud-computing/

33
Q

Engineers are wasting a lot of time and effort managing batch computing software in traditional data centers. Which of the following AWS services allows them to easily run thousands of batch computing jobs?

  • Amazon EC2
  • Lambda@Edge
  • AWS Fargate
  • AWS Batch
A

**AWS Batch**

Explanation
AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory-optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems. AWS Batch plans, schedules, and executes your batch computing workloads across the full range of AWS compute services and features, such as Amazon EC2 and Spot Instances.
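As an illustrative sketch only (not part of the exam material): because AWS Batch handles provisioning and scheduling, running thousands of jobs reduces to submitting a request against a job queue and job definition. The queue name, job definition, and job count below are hypothetical; in practice this dictionary would be passed to boto3 as `boto3.client("batch").submit_job(**request)`.

```python
# Minimal sketch of an AWS Batch submit_job request using an array job
# to fan out many identical batch computing jobs. Names are hypothetical.

def build_job_request(name: str, queue: str, job_def: str, count: int) -> dict:
    """Build a submit_job request that runs `count` copies of one job."""
    return {
        "jobName": name,
        "jobQueue": queue,          # Batch provisions compute for this queue
        "jobDefinition": job_def,   # container image, vCPU/memory requirements
        # An array job runs `count` child jobs; each child reads its index
        # from the AWS_BATCH_JOB_ARRAY_INDEX environment variable.
        "arrayProperties": {"size": count},
    }

request = build_job_request("nightly-transcode", "media-queue",
                            "transcode-job:1", 1000)
```

The point for the exam is that no batch scheduling software or server cluster is installed or managed by the engineers; AWS Batch provisions and scales the compute itself.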

The other options are incorrect:

“Amazon EC2” is incorrect. Amazon EC2 can be used to run any number of batch processing jobs, but you are responsible for installing and managing the batch computing software and creating the server clusters.

“AWS Fargate” is incorrect. AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). AWS Fargate allows customers to run containers without having to manage servers or clusters.

“Lambda@Edge” is incorrect. Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to your global end-users, which improves performance and reduces latency.