AWS Certified Cloud Practitioner Practice Exam (1) Flashcards

1
Q

Which of the following helps a customer view the Amazon EC2 billing activity for the past month?

1) AWS Cost & Usage Reports
2) AWS Systems Manager
3) AWS Budgets
4) AWS Pricing Calculator

A

AWS Cost & Usage Reports

The AWS Cost & Usage Report is your one-stop shop for accessing the most detailed information available about your AWS costs and usage. It lists AWS usage for each service category used by an account and its IAM users in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes.

2
Q

What is the AWS feature that provides an additional level of security above the default authentication mechanism of usernames and passwords?

1) AWS MFA
2) Email verification
3) AWS KMS
4) Encrypted keys

A

AWS MFA

AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password (the first factor—what they know), as well as for an authentication code from their AWS MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for your AWS account settings and resources.
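Virtual MFA devices (such as authenticator apps) typically generate that second-factor code with the TOTP algorithm (RFC 6238), which applies HOTP (RFC 4226) to a time-based counter. A minimal sketch of the computation, using the RFC 4226 test secret:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 Appendix D test secret; counter 0 yields the documented value "755224".
print(hotp(b"12345678901234567890", 0))
```

This is a conceptual illustration of how the "what you have" factor works, not a description of any AWS-internal implementation.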

3
Q

An organization has decided to purchase an Amazon EC2 Reserved Instance (RI) for three years in order to reduce costs. It is possible that the application workloads could change during the reservation period.

What is the EC2 Reserved Instance (RI) type that will allow the company to exchange the purchased reserved instance for another reserved instance with higher computing power if they need to?

1) Standard RI
2) Elastic RI
3) Convertible RI
4) Premium RI

A

Convertible RI

When your needs change, you can exchange your Convertible Reserved Instances and continue to benefit from the reservation’s pricing discount. With Convertible RIs, you can exchange one or more Reserved Instances for another Reserved Instance with a different configuration, including instance family, operating system, and tenancy. There are no limits to how many times you perform an exchange, as long as the new Convertible Reserved Instance is of an equal or higher value than the original Convertible Reserved Instance that you are exchanging.

4
Q

Which of the following is an example of horizontal scaling in the AWS Cloud?

1) Adding more RAM capacity to an EC2 instance
2) Increasing the compute capacity of a single EC2 instance to address the growing demands of an application
3) Replacing an existing EC2 instance with a larger, more powerful one
4) Adding more EC2 instances of the same size to handle an increase in traffic

A

Adding more EC2 instances of the same size to handle an increase in traffic

5
Q

As part of the Enterprise support plan, who is the primary point of contact for ongoing support needs?

1) AWS Identity and Access Management (IAM) user
2) AWS Countdown engineer
3) Technical Account Manager (TAM)
4) AWS Partner Network (APN)

A

Technical Account Manager (TAM)

For Enterprise-level customers, a TAM (Technical Account Manager) provides technical expertise for the full range of AWS services and obtains a detailed understanding of your use case and technology architecture. TAMs work with AWS Solution Architects to help you launch new projects and give best practices recommendations throughout the implementation life cycle. Your TAM is the primary point of contact for ongoing support needs, and you have a direct telephone line to your TAM.

6
Q

A company is planning to host an educational website on AWS. Their video courses will be streamed all around the world. Which of the following AWS services will help achieve high transfer speeds?

1) Amazon Kinesis Video Streams
2) Amazon CloudFront
3) Amazon SNS
4) AWS CloudFormation

A

Amazon CloudFront

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.

The use cases of Amazon CloudFront include:

1- Accelerate static website content delivery.

CloudFront can speed up the delivery of your static content (for example, images, style sheets, JavaScript, and so on) to viewers across the globe. By using CloudFront, you can take advantage of the AWS backbone network and CloudFront edge servers to give your viewers a fast, safe, and reliable experience when they visit your website.

2- Live & on-demand video streaming.
The Amazon CloudFront CDN offers multiple options for streaming your media – both pre-recorded files and live events – at the sustained high throughput required for 4K delivery to global viewers.

3- Security.

CloudFront integrates seamlessly with AWS Shield for Layer 3/4 DDoS mitigation and AWS WAF for Layer 7 protection.

4- Customizable content delivery with Lambda@Edge.

Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance and reduces latency.

7
Q

What is the AWS service that enables AWS architects to manage infrastructure as code?

1) Amazon EMR
2) AWS CloudFormation
3) AWS Config
4) Amazon SES

A

AWS CloudFormation

AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. You don’t need to individually create and configure AWS resources and figure out what’s dependent on what; AWS CloudFormation handles all that for you.
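The template-driven workflow can be illustrated with a minimal sketch (a hypothetical example, not taken from any question here): a CloudFormation template in YAML that declares a single EC2 instance, which CloudFormation then provisions for you.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal illustrative template - a single EC2 instance
Parameters:
  LatestAmiId:
    # Resolves to a current Amazon Linux 2 AMI at deploy time
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: !Ref LatestAmiId
      Tags:
        - Key: Name
          Value: cloudformation-demo
```

Deploying the same template in another account or Region reproduces the same infrastructure, which is the core idea of infrastructure as code.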

8
Q

Which service provides object-level storage in AWS?

1) Amazon EFS
2) Amazon EBS
3) Amazon Instance Store
4) Amazon S3

A

Amazon S3

Amazon S3 is object-level storage built to store and retrieve any amount of data from anywhere – websites and mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry.

9
Q

Which of the following must an IAM user provide to interact with AWS services using the AWS Command Line Interface (AWS CLI)?

1) User ID
2) Access keys
3) Secret token
4) User name and password

A

Access keys

Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests to AWS using the CLI or the SDK.
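For illustration, the CLI reads these keys from the `~/.aws/credentials` file, which is typically written by running `aws configure`. The values below are the placeholder keys AWS uses in its documentation, not real credentials:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

Every CLI or SDK request is then signed with this key pair; the user name and password are only used for AWS Management Console sign-in.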

10
Q

What does AWS Health provide? (Choose TWO)

1) Personalized view of AWS service health
2) Health checks for Auto Scaling instances
3) A dashboard detailing vulnerabilities in your applications
4) Detailed troubleshooting guidance to address AWS events impacting your resources
5) Recommendations for Cost Optimization

A

1) Personalized view of AWS service health
4) Detailed troubleshooting guidance to address AWS events impacting your resources

AWS Health (previously AWS Personal Health Dashboard) is the single place to learn about the availability and operations of AWS services. You can view the overall status of all AWS services, and you can sign in to access a personalized view of the health of the specific services that are powering your workloads and applications. AWS Health proactively notifies you when AWS experiences any events that may affect you, helping provide quick visibility and guidance to minimize the impact of events in progress, and plan for any scheduled changes, such as AWS hardware maintenance.

The benefits of the AWS Health include:

  • A Personalized View of Service Health: AWS Health gives you a personalized view of the status of the AWS services that power your applications, enabling you to quickly see when AWS is experiencing issues that may impact you. For example, in the event of a lost EBS volume associated with one of your EC2 instances, you would gain quick visibility into the status of the specific service you are using, helping save precious time troubleshooting to determine root cause.
  • Proactive Notifications: The dashboard also provides forward-looking notifications, and you can set up alerts across multiple channels, including email and mobile notifications, so you receive timely and relevant information to help plan for scheduled changes that may affect you. In the event of AWS hardware maintenance activities that may impact one of your EC2 instances, for example, you would receive an alert with information to help you plan for, and proactively address, any issues associated with the upcoming change.
  • Detailed Troubleshooting Guidance: When you get an alert, it includes remediation details and specific guidance to enable you to take immediate action to address AWS events impacting your resources. For example, in the event of an AWS hardware failure impacting one of your EBS volumes, your alert would include a list of your affected resources, a recommendation to restore your volume, and links to the steps to help you restore it from a snapshot. This targeted and actionable information reduces the time needed to resolve issues.

11
Q

You have noticed that several critical Amazon EC2 instances have been terminated. Which of the following AWS services would help you determine who took this action?

1) AWS Trusted Advisor
2) EC2 Instance Usage Report
3) AWS CloudTrail
4) Amazon Inspector

A

AWS CloudTrail

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

12
Q

In order to implement best practices when dealing with a “Single Point of Failure,” you should attempt to build as much automation as possible in both detecting and reacting to failure. Which of the following AWS services would help? (Choose TWO)

1) ELB
2) Amazon EC2
3) Auto Scaling
4) Amazon Athena
5) ECR

A

1) ELB
3) Auto Scaling

You should attempt to build as much automation as possible in both detecting and reacting to failure. You can use services like ELB and Amazon Route 53 to configure health checks and mask failure by routing traffic only to healthy endpoints. In addition, Auto Scaling can be configured to automatically replace unhealthy nodes. You can also replace unhealthy nodes using the Amazon EC2 auto-recovery feature. It won’t be possible to predict every possible failure scenario on day one, so make sure you collect enough logs and metrics to understand normal system behavior. Once you understand that, you will be able to set up alarms that trigger automated responses or manual intervention.
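As a rough sketch of this automation, a CloudFormation fragment for an Auto Scaling group that uses ELB health checks might look like the following (the launch template, target group, and subnet IDs are hypothetical placeholders assumed to be defined elsewhere in the template):

```yaml
Resources:
  WebAsg:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"            # at least two instances, avoiding a single point of failure
      MaxSize: "6"
      VPCZoneIdentifier:      # spread across subnets in different Availability Zones
        - subnet-aaaa1111     # placeholder subnet IDs
        - subnet-bbbb2222
      TargetGroupARNs:
        - !Ref WebTargetGroup # register instances with the load balancer (defined elsewhere)
      HealthCheckType: ELB    # replace instances the load balancer reports as unhealthy
      HealthCheckGracePeriod: 120
      LaunchTemplate:
        LaunchTemplateId: !Ref WebLaunchTemplate
        Version: !GetAtt WebLaunchTemplate.LatestVersionNumber
```

With `HealthCheckType: ELB`, a failed load balancer health check marks the instance unhealthy, and Auto Scaling terminates and replaces it automatically: failure is both detected and reacted to without manual intervention.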

13
Q

According to the AWS Acceptable Use Policy, which of the following statements is true regarding penetration testing of EC2 instances?

1) Penetration testing is performed automatically by AWS to determine vulnerabilities in your AWS infrastructure
2) Penetration testing can be performed by the customer on their own instances without prior authorization from AWS
3) Penetration testing is not allowed in AWS
4) AWS customers are only allowed to perform penetration testing on services managed by AWS

A

Penetration testing can be performed by the customer on their own instances without prior authorization from AWS

AWS customers are allowed to carry out security assessments and penetration tests against their AWS infrastructure without prior approval for 15 services (no need to memorize these services):

1- Amazon EC2 instances, WAF, NAT Gateways, and Elastic Load Balancers.

2- Amazon RDS.

3- Amazon CloudFront.

4- Amazon Aurora.

5- Amazon API Gateway.

6- AWS Lambda and Lambda Edge functions.

7- Amazon Lightsail resources.

8- Amazon Elastic Beanstalk environments.

9- AWS AppSync

10- Amazon Elastic Container Service

11- AWS Fargate

12- Amazon Elasticsearch

13- Amazon FSx

14- Amazon Transit Gateway

15- S3 hosted applications (targeting S3 buckets is strictly prohibited)

14
Q

What does Amazon CloudFront use to distribute content to global users with low latency?

1) AWS Edge Locations
2) AWS Regions
3) AWS Global Accelerator
4) AWS Availability Zones

A

AWS Edge Locations

To deliver content to global end users with lower latency, Amazon CloudFront uses a global network of Edge Locations and Regional Edge Caches in multiple cities around the world. Amazon CloudFront uses this network to cache copies of your content close to your end-users. Amazon CloudFront ensures that end-user requests are served by the closest edge location. As a result, end-user requests travel a short distance, improving performance for your end-users, while reducing the load on the origin servers.

15
Q

In the AWS Shared Responsibility Model, which of the following are the responsibility of the customer? (Choose TWO)

1) Patching the Network infrastructure
2) Disk disposal
3) Configuring network access rules
4) Controlling physical access to compute resources
5) Setting password complexity rules

A

3) Configuring network access rules
5) Setting password complexity rules

The customer is responsible for securing their network by configuring security groups, network access control lists (network ACLs), and route tables. The customer is also responsible for setting a password policy on their AWS account that specifies the complexity and mandatory rotation periods for their IAM users’ passwords.
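For example, a customer-configured network access rule might be a security group that admits only inbound HTTPS. A minimal CloudFormation sketch (the VPC ID is a placeholder):

```yaml
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTPS only
      VpcId: vpc-aaaa1111        # placeholder VPC ID
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0      # HTTPS from anywhere; all other inbound traffic is denied
```

Security groups are deny-by-default for inbound traffic, so listing only port 443 is itself the access rule: everything not allowed is blocked.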

16
Q

You want to run a questionnaire application for only one day (without interruption). Which Amazon EC2 purchase option should you use?

1) Spot instances
2) On-demand instances
3) Dedicated instances
4) Reserved instances

A

On-demand instances

With On-Demand instances, you pay for compute capacity by the hour or second (minimum of 60 seconds) with no long-term commitments. You can increase or decrease your compute capacity depending on the demands of your application and only pay for what you use.

The use of On-Demand instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware, and transforms what are commonly large fixed costs into much smaller variable costs. On-Demand instances also remove the need to buy “safety net” capacity to handle periodic traffic spikes.

17
Q

Which statement is true regarding the AWS Shared Responsibility Model?

1) Patching the guest OS is always the responsibility of AWS
2) Responsibilities vary depending on the services used
3) Security of the managed services is the responsibility of the customer
4) Security of the IaaS services is the responsibility of AWS

A

Responsibilities vary depending on the services used

Customers should be aware that their responsibilities may vary depending on the AWS services chosen. For example, when using Amazon EC2, you are responsible for applying operating system and application security patches regularly. However, such patches are applied automatically when using Amazon RDS.

18
Q

Adjusting compute capacity dynamically to reduce cost is an implementation of which AWS cloud best practice?

1) Parallelize tasks
2) Adopt monolithic architecture
3) Implement elasticity
4) Build security in every layer

A

Implement elasticity

In the traditional data center-based model of IT, once infrastructure is deployed, it typically runs whether it is needed or not, and all the capacity is paid for, regardless of how much it gets used. In the cloud, resources are elastic, meaning they can instantly grow (to maintain performance) or shrink (to reduce costs).

19
Q

Which of the following can be described as a global content delivery network (CDN) service?

1) AWS Regions
2) AWS Direct Connect
3) AWS VPN
4) Amazon CloudFront

A

Amazon CloudFront

Amazon CloudFront is a global content delivery network (CDN) service that gives businesses and web application developers an easy and cost-effective way to distribute content (such as videos, data, applications, and APIs) with low latency and high data transfer speeds. Like other AWS services, Amazon CloudFront is a self-service, pay-per-use offering, requiring no long-term commitments or minimum fees. With CloudFront, your files are delivered to end users using a global network of edge locations. CloudFront is integrated with other AWS services, such as AWS Shield for DDoS mitigation; Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications; and Lambda@Edge to run custom code close to your viewers.

20
Q

Which of the below options are related to the reliability of AWS? (Choose TWO)

1) Applying the principle of least privilege to all AWS resources
2) Automatically provisioning new resources to meet demand
3) All AWS services are considered Global Services, and this design helps customers serve their international users
4) Ability to recover quickly from failures
5) Providing compensation to customers if issues occur

A

2) Automatically provisioning new resources to meet demand
4) Ability to recover quickly from failures

Reliability encompasses the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. The automatic provisioning of resources and the ability to recover from failures meet these criteria.

21
Q

A developer is planning to build a two-tier web application that has a MySQL database layer. Which of the following AWS database services would provide automated backups for the application?

1) A MySQL database installed on an EC2 instance
2) Amazon Aurora
3) Amazon Neptune
4) Amazon DynamoDB

A

Amazon Aurora

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud. Amazon Aurora combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. It delivers up to five times the throughput of standard MySQL and up to three times the throughput of standard PostgreSQL. Amazon Aurora is designed to be compatible with MySQL and with PostgreSQL, so that existing applications and tools can run without requiring modification. It is available through Amazon Relational Database Service (RDS), freeing you from time-consuming administrative tasks such as provisioning, patching, backup, recovery, failure detection, and repair.

22
Q

A company is deploying a new two-tier web application in AWS. Where should the most frequently accessed data be stored so that the application’s response time is optimal?

1) Amazon ElastiCache
2) Amazon FSx
3) AWS Storage Gateway
4) Amazon EBS volume

A

Amazon ElastiCache

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.

The primary purpose of an in-memory data store is to provide ultrafast (submillisecond latency) and inexpensive access to copies of data. Querying a database is always slower and more expensive than locating a copy of that data in a cache. Some database queries are especially expensive to perform. An example is queries that involve joins across multiple tables or queries with intensive calculations. By caching (storing) such query results, you pay the price of the query only once. Then you can quickly retrieve the data multiple times without having to re-execute the query.

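That cache-aside pattern can be sketched in a few lines. The in-process `SimpleCache` below is a toy stand-in for a real ElastiCache (Redis/Memcached) client, and the query function is a hypothetical placeholder for a slow database call:

```python
import time

class SimpleCache:
    """In-memory key-value cache with TTL - a toy stand-in for Redis/Memcached."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:
            del self._store[key]          # lazily evict expired entries
            return None
        return value

    def set(self, key, value, ttl=300):
        self._store[key] = (value, time.time() + ttl)

def expensive_query(user_id):
    """Placeholder for a slow, expensive database query."""
    return {"user_id": user_id, "name": f"user-{user_id}"}

cache = SimpleCache()

def get_user(user_id):
    # Cache-aside: check the cache first, fall back to the database on a miss.
    key = f"user:{user_id}"
    user = cache.get(key)
    if user is None:
        user = expensive_query(user_id)   # pay the query cost only once
        cache.set(key, user, ttl=60)
    return user

print(get_user(42))  # first call misses the cache and runs the query
print(get_user(42))  # second call is served from the cache
```

With a real ElastiCache deployment the idea is the same; only the `get`/`set` calls go over the network to the managed in-memory store instead of a local dictionary.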
23
Q

What are the benefits of having infrastructure hosted in AWS? (Choose TWO)

1) Increasing speed and agility
2) All of the physical security and most of the data/network security are taken care of for you
3) There is no need to worry about security
4) Gaining complete control over the physical infrastructure
5) Operating applications on behalf of customers

A

1) Increasing speed and agility
2) All of the physical security and most of the data/network security are taken care of for you

All of the physical security is taken care of for you. Amazon data centers are surrounded by three physical layers of security. “Nothing can go in or out without setting off an alarm.” It’s important to keep bad guys out, but equally important to keep the data in, which is why Amazon monitors incoming gear, tracking every disk that enters the facility. And “if it breaks we don’t return the disk for warranty. The only way a disk leaves our data center is when it’s confetti.”

Most (not all) data and network security is taken care of for you. When we talk about data/network security, AWS has a “shared responsibility model” where AWS and the customer share the responsibility of securing it. For example, the customer is responsible for creating rules to secure their network traffic using security groups and is also responsible for protecting data with encryption.

“Increasing speed and agility” is also a correct answer because in a cloud computing environment, new IT resources are only a click away, which means it takes less time to make those resources available to developers – from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.

24
Q

What is the advantage of the AWS-recommended practice of “decoupling” applications?

1) Allows updates of any monolithic application quickly and easily
2) Reduces inter-dependencies so that failures do not impact other components of the application
3) Allows tracking of any API call made to any AWS service
4) Allows treating an application as a single, cohesive unit

A

Reduces inter-dependencies so that failures do not impact other components of the application

As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies – a change or a failure in one component should not cascade to other components. On the other hand, if the components of an application are tightly coupled and one component fails, the entire application will also fail. Therefore, when designing your application, you should always decouple its components.
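In AWS, decoupling is often implemented by putting a message queue such as Amazon SQS between components. A conceptual sketch using an in-process queue as a stand-in (the tiers and order data here are hypothetical):

```python
import queue
import threading

# The queue decouples producers from consumers: the web tier can keep
# accepting orders even if the worker tier is slow or briefly down.
orders = queue.Queue()
processed = []

def web_tier():
    """Producer: accepts requests and enqueues work instead of calling the worker directly."""
    for order_id in range(5):
        orders.put({"order_id": order_id})

def worker_tier():
    """Consumer: drains the queue at its own pace."""
    while True:
        order = orders.get()
        if order is None:          # sentinel: no more work
            break
        processed.append(order["order_id"])

worker = threading.Thread(target=worker_tier)
worker.start()
web_tier()
orders.put(None)                   # signal the worker to stop
worker.join()
print(processed)  # all five orders processed, with no direct call between the tiers
```

Because the only contract between the two tiers is the message format, either side can fail, scale, or be replaced without the other needing to change.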

25
Q

How can you view the distribution of AWS spending in one of your AWS accounts?

1) By contacting the AWS Finance team
2) By contacting the AWS Support team
3) By using AWS Cost Explorer
4) By using Amazon VPC console

A

By using AWS Cost Explorer

AWS Cost Explorer is a free tool that you can use to view your costs and usage. You can view data up to the last 12 months, forecast how much you are likely to spend for the next 12 months, and get recommendations for what Reserved Instances to purchase. You can use AWS Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that you can use to understand your costs. You can also specify time ranges for the data, and view time data by day or by month.

26
Q

An organization has a large number of technical employees who operate their AWS Cloud infrastructure. What does AWS provide to help organize them into teams and then assign the appropriate permissions for each team?

1) IAM roles
2) AWS Organizations
3) IAM user groups
4) IAM users

A

IAM user groups

An IAM user group is a collection of IAM users that are managed as a unit. User groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users. For example, you could have a user group called Admins and give that user group the types of permissions that administrators typically need. Any user in that user group automatically has the permissions that are assigned to the user group. If a new user joins your organization and needs administrator privileges, you can assign the appropriate permissions by adding the user to that user group. Similarly, if a person changes jobs in your organization, instead of editing that user’s permissions, you can remove him or her from the old user groups and add him or her to the appropriate new user groups.

27
Q

A startup company is operating on limited funds and is extremely concerned about cost overruns. Which of the below options can be used to notify the company when their monthly AWS bill exceeds $2000? (Choose TWO)

1) Set up a CloudWatch billing alarm that triggers an SNS notification when the threshold is exceeded
2) Configure the Amazon Simple Email Service to send billing alerts to their email address on a daily basis
3) Configure the AWS Budgets Service to alert the company when the threshold is exceeded
4) Configure the Amazon Connect Service to alert the company when the threshold is exceeded
5) Configure AWS CloudTrail to automatically delete all AWS resources when the threshold is exceeded

A

1) Set up a CloudWatch billing alarm that triggers an SNS notification when the threshold is exceeded
3) Configure the AWS Budgets Service to alert the company when the threshold is exceeded

In CloudWatch, you can set up a billing alarm that triggers if your costs exceed a threshold that you set. This CloudWatch alarm can also be configured to trigger an SNS notification to your email address.

AWS Budgets is another AWS service that can be used in this scenario. AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. The difference between AWS Budgets and Amazon CloudWatch billing alarms is that Amazon CloudWatch billing alarms alert you only when your actual cost exceeds a certain threshold, while AWS Budgets can be configured to alert you when the actual or forecasted cost exceeds a certain threshold.
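A minimal CloudFormation sketch of the CloudWatch billing alarm described above (billing metrics are published in the us-east-1 Region under the AWS/Billing namespace; the email subscription to the SNS topic is omitted for brevity):

```yaml
Resources:
  BillingAlertTopic:
    Type: AWS::SNS::Topic
  BillingAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Alert when estimated monthly charges exceed $2000
      Namespace: AWS/Billing          # billing metrics live in us-east-1
      MetricName: EstimatedCharges
      Dimensions:
        - Name: Currency
          Value: USD
      Statistic: Maximum
      Period: 21600                   # evaluate every 6 hours
      EvaluationPeriods: 1
      Threshold: 2000
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - !Ref BillingAlertTopic      # publish an SNS notification when breached
```

Note that receiving billing metric data in CloudWatch first requires enabling billing alerts in the account's Billing preferences.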

28
Q

What is the AWS database service that allows you to upload data structured in key-value format?

1) Amazon Redshift
2) Amazon Aurora
3) Amazon DynamoDB
4) Amazon RDS

A

Amazon DynamoDB

Amazon DynamoDB is a NoSQL database service. NoSQL databases are used for unstructured data, which is typically stored in JSON-like, key-value documents.
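For illustration, a DynamoDB item in the service's JSON wire format, where each attribute value is tagged with its type ("S" string, "N" number, "BOOL" boolean, "SS" string set). The item below is hypothetical:

```json
{
  "ProductId": {"S": "book-1234"},
  "Title":     {"S": "AWS Certified Cloud Practitioner Guide"},
  "Price":     {"N": "29.99"},
  "InStock":   {"BOOL": true},
  "Tags":      {"SS": ["aws", "certification"]}
}
```

The table's primary key (here, presumably `ProductId`) is the "key" in key-value: you upload and retrieve items by that key rather than with SQL queries over a fixed schema.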

29
Q

What is the AWS service that provides a virtual network dedicated to your AWS account?

1) AWS Dedicated Hosts
2) AWS VPN
3) Amazon VPC
4) AWS Subnets

A

Amazon VPC

Amazon Virtual Private Cloud (Amazon VPC) allows you to carve out a portion of the AWS Cloud that is dedicated to your AWS account. Amazon VPC enables you to launch AWS resources into a virtual network that you’ve defined. This virtual network closely resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

30
Q

Under the shared responsibility model, which of the following is the responsibility of AWS?

1) Configuring infrastructure devices
2) Server-side encryption
3) Filtering traffic with Security Groups
4) Client-side encryption

A

Configuring infrastructure devices

Under the shared responsibility model, AWS is responsible for the hardware and software that run AWS services. This includes patching the infrastructure software and configuring infrastructure devices. As a customer, you are responsible for implementing best practices for data encryption, patching guest operating system and applications, identity and access management, and network & firewall configurations.

31
Q

Which of the following does NOT belong to the AWS Cloud Computing models?

1) Infrastructure as a Service (IaaS)
2) Platform as a Service (PaaS)
3) Networking as a Service (NaaS)
4) Software as a Service (SaaS)

A

Networking as a Service (NaaS)

32
Q

AWS allows users to manage their resources using a web-based user interface. What is the name of this interface?

1) AWS API
2) AWS SDK
3) AWS CLI
4) AWS Management Console

A

AWS Management Console

The AWS Management Console allows you to access and manage Amazon Web Services through a simple and intuitive web-based user interface. You can also use the AWS Console mobile app to quickly view resources on the go.

33
Q

A company is concerned that they are spending money on underutilized compute resources in AWS. Which AWS feature will help ensure that their applications are automatically adding/removing EC2 compute capacity to closely match the required demand?

1) AWS Cost Explorer
2) AWS Elastic Load Balancer
3) AWS Budgets
4) AWS Auto Scaling

A

AWS Auto Scaling

AWS Auto Scaling is the feature that automates the process of adding/removing server capacity based on demand. Auto Scaling allows you to reduce your costs by automatically turning off resources that aren’t in use, and it ensures that your application runs effectively by provisioning more server capacity when required.

34
Q

Hundreds of thousands of DDoS attacks are recorded every month worldwide. Which services does AWS provide to help protect AWS customers from these attacks? (Choose TWO)

1) AWS Config
2) AWS Shield
3) Amazon Cognito
4) AWS KMS
5) AWS WAF

A

2) AWS Shield
5) AWS WAF

AWS provides flexible infrastructure and services that help customers implement strong DDoS mitigations and create highly available application architectures that follow AWS Best Practices for DDoS Resiliency. These include services such as Amazon Route 53, Amazon CloudFront, Elastic Load Balancing, and AWS WAF to control and absorb traffic, and deflect unwanted requests. These services integrate with AWS Shield, a managed DDoS protection service that provides always-on detection and automatic inline mitigations to safeguard web applications running on AWS.

35
Q

You are working on a project that involves creating thumbnails of millions of images. Consistent uptime is not an issue, and continuous processing is not required. Which EC2 buying option would be the most cost-effective?

1) Spot Instances
2) Dedicated Instances
3) On-demand Instances
4) Reserved Instances

A

Spot Instances

Spot instances provide a discount (up to 90%) off the On-Demand price. The Spot price is determined by long-term trends in supply and demand for EC2 spare capacity. If the Spot price exceeds the maximum price you specify for a given instance or if capacity is no longer available, your instance will automatically be interrupted.

     Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if you don't mind if your applications get interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.
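The interruption condition described above can be sketched as a small predicate. This is a simplification for intuition only, not the actual Spot service logic (which also involves capacity pools and rebalance signals):

```python
# Sketch of the Spot interruption rule: an instance is reclaimed when the
# Spot price rises above your maximum price, or when EC2 needs the spare
# capacity back. A simplification, not actual AWS behavior.
def spot_interrupted(spot_price: float, max_price: float,
                     capacity_available: bool = True) -> bool:
    return (spot_price > max_price) or not capacity_available

print(spot_interrupted(0.05, 0.04))                            # price exceeded
print(spot_interrupted(0.03, 0.04, capacity_available=False))  # capacity reclaimed
```

Because either condition can end the instance at any time, Spot suits workloads like the thumbnailing batch job here, where interrupted work can simply be retried.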

36
Q

A company has developed an eCommerce web application in AWS. What should they do to ensure that the application has the highest level of availability?

1) Deploy the application across multiple Regions and Availability Zones
2) Deploy the application across multiple VPC’s and subnets
3) Deploy the application across multiple Availability Zones and subnets
4) Deploy the application across multiple Availability Zones and Edge locations

A

Deploy the application across multiple Regions and Availability Zones

The AWS Global infrastructure is built around Regions and Availability Zones (AZs). Each AWS Region is a separate geographic area. Each AWS Region has multiple, isolated locations known as Availability Zones. Availability Zones in a region are connected with low latency, high throughput, and highly redundant networking. These Availability Zones offer AWS customers an easier and more effective way to design and operate applications and databases, making them more highly available, fault tolerant, and scalable than traditional single datacenter infrastructures or multi-datacenter infrastructures.

      In addition to replicating applications and data across multiple data centers in the same Region using Availability Zones, you can also choose to increase redundancy and fault tolerance further by replicating data between geographic Regions (especially if you are serving customers from all over the world). You can do so using both private, high speed networking and public internet connections to provide an additional layer of business continuity, or to provide low latency access across the globe.

37
Q

What does AWS Snowball provide? (Choose TWO)

1) Secure transfer of large amounts of data into and out of the AWS Cloud
2) A catalog of third-party software solutions that customers need to build solutions and run their businesses
3) An online data transfer service that allows you to move extremely large amounts of data to AWS
4) A hybrid cloud storage between on-premises environments and the AWS Cloud
5) Built-in computing capabilities that allow customers to process data locally

A

1) Secure transfer of large amounts of data into and out of the AWS Cloud
5) Built-in computing capabilities that allow customers to process data locally

AWS Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns. AWS Customers use Snowball to migrate analytics data, genomics data, video libraries, image repositories, and backups. Transferring data with Snowball is simple, fast, secure, and can cost as little as one-fifth the cost of using high-speed internet.

   Additionally, With AWS Snowball, you can access the compute power of the AWS Cloud locally and cost-effectively in places where connecting to the internet might not be an option. AWS Snowball is a perfect choice if you need to run computing in rugged, austere, mobile, or disconnected (or intermittently connected) environments.

   With AWS Snowball, you have the choice of two devices, Snowball Edge Compute Optimized with more computing capabilities, suited for higher performance workloads, or Snowball Edge Storage Optimized with more storage, which is suited for large-scale data migrations and capacity-oriented workloads.

   Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. It is also a good fit for running general purpose analysis such as IoT data aggregation and transformation.

   Snowball Edge Compute Optimized is the optimal choice if you need powerful compute and high-speed storage for data processing. Examples include high-resolution video processing, advanced IoT data analytics, and real-time optimization of machine learning models.

38
Q

What do you gain from setting up consolidated billing for five different AWS accounts under a single management account?

1) The consolidated billing feature is just for organizational purposes
2) AWS services’ costs will be reduced to half the original price
3) Each AWS account gets five times the free-tier services capacity
4) Each AWS account gets volume discounts

A

Each AWS account gets volume discounts

AWS consolidated billing enables an organization to consolidate payments for multiple AWS accounts by designating a single payer (management) account. For billing purposes, AWS treats all the accounts on the consolidated bill as one account. Some services, such as Amazon EC2 and Amazon S3, have volume pricing tiers across certain usage dimensions that give the user lower prices the more they use the service. For example, suppose storage costs $0.023 per GB for the first 50 TB each month and $0.022 per GB beyond that, and three accounts each store 50 TB. Billed separately, each account pays $1,150 for its 50 TB (50,000 GB × $0.023, using 1 TB = 1,000 GB), a total of $3,450, and no account ever reaches the cheaper tier. On a consolidated bill, the combined 150 TB is tiered as one account: $1,150 for the first 50 TB plus $2,200 for the remaining 100 TB, a total of $3,350, which saves the organization $100 per month.
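The volume-tier arithmetic can be checked with a short sketch. The tier sizes and per-GB prices here are illustrative, not current AWS pricing:

```python
# Hypothetical S3-style volume tiers: first 50 TB at $0.023/GB, the next
# 450 TB at $0.022/GB (1 TB = 1,000 GB for this illustration).
TIERS = [(50_000, 0.023), (450_000, 0.022)]  # (tier size in GB, price per GB)

def monthly_cost(gb: float) -> float:
    """Cost of storing `gb` gigabytes for one month under the tiered rates."""
    cost, remaining = 0.0, gb
    for size, price in TIERS:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

separate = 3 * monthly_cost(50_000)      # three accounts billed individually
consolidated = monthly_cost(150_000)     # one consolidated bill for 150 TB
print(separate, consolidated, separate - consolidated)
```

Run separately, each account stays in the most expensive tier; consolidated, 100 TB of the combined usage is billed at the lower rate.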

HOW IT WORKS

   After you create an organization and verify that you own the email address associated with the management account, you can invite existing AWS accounts to join your organization. When you invite an account, the AWS Organizations service sends an invitation to the account owner, who decides whether to accept or decline the invitation. If they accept, their account becomes a member of that organization.

   At the moment an account accepts the invitation to join an organization, the management account of the organization becomes liable for all charges accrued by the new member account. The payment method attached to the member account is no longer used. Instead, the payment method attached to the management account of the organization pays for all charges accrued by the member account.

39
Q

What does AWS provide to deploy popular technologies - such as IBM MQ - on AWS with the least amount of effort and time?

1) Amazon Aurora
2) AWS Partner Solutions (AWS Quick Starts)
3) AWS CloudShell
4) Amazon CloudWatch

A

AWS Partner Solutions (AWS Quick Starts)

AWS Partner Solutions (formerly AWS Quick Starts) outline the architectures for popular enterprise solutions on AWS and provide AWS CloudFormation templates to automate their deployment. Each Partner Solution launches, configures, and runs the AWS compute, network, storage, and other services required to deploy a specific workload on AWS, using AWS best practices for security and availability.

    AWS Partner Solutions are automated reference deployments built by AWS solutions architects and partners to help you deploy popular technologies on AWS, based on AWS best practices. These accelerators reduce hundreds of manual installation and configuration procedures into just a few steps, so you can build your production environment quickly and start using it immediately.

40
Q

Select TWO examples of the AWS shared controls.

1) Configuration Management
2) VPC Management
3) IAM Management
4) Data Center operations
5) Patch Management

A

1) Configuration Management
5) Patch Management

Shared Controls are controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services.

Examples include:

** Patch Management – AWS is responsible for patching the underlying hosts and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.

** Configuration Management – AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.

** Awareness & Training - AWS trains AWS employees, but a customer must train their own employees.

Additional information:

A computer on which AWS runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. AWS uses virtualization to let a physical host machine operate multiple virtual machines as guests (for multiple customers), which helps maximize the effective use of computing resources such as memory, network bandwidth, and CPU cycles.

41
Q

The identification process of an online financial services company requires that new users must complete an online interview with their security team. The completed recorded interviews are only required in the event of a legal issue or a regulatory compliance breach. What is the most cost-effective service to store the recorded videos?

1) Amazon EBS
2) Amazon S3 Glacier Deep Archive
3) S3 Intelligent-Tiering
4) AWS Marketplace

A

Amazon S3 Glacier Deep Archive

Amazon S3 Glacier Deep Archive is an extremely low-cost storage service that provides secure, durable, and flexible storage for long-term data backup and archival. With Amazon S3 Glacier Deep Archive, customers can reliably store their data for as little as $1 per terabyte per month, a significant savings compared to on-premises solutions. Amazon S3 Glacier Deep Archive enables customers to offload the administrative burdens of operating and scaling storage to AWS, so that they don’t have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations.

42
Q

You have set up consolidated billing for several AWS accounts. One of the accounts has purchased a number of reserved instances for 3 years. Which of the following is true regarding this scenario?

1) The Reserved Instance discounts can only be shared with the management account
2) All accounts can receive the hourly cost benefit of the Reserved Instances
3) There are no cost benefits from using consolidated billing; It is for informational purposes only
4) The purchased instances will have better performance than On-demand instances

A

All accounts can receive the hourly cost benefit of the Reserved Instances

For billing purposes, the consolidated billing feature of AWS Organizations treats all the accounts in the organization as one account. This means that all accounts in the organization can receive the hourly cost benefit of Reserved Instances that are purchased by any other account. For example, suppose that Fiona and John each have an account in an organization. Fiona has five Reserved Instances of the same type, and John has none. During one particular hour, Fiona uses three instances and John uses six, for a total of nine instances on the organization’s consolidated bill. AWS bills five instances as Reserved Instances, and the remaining four instances as On-demand instances.
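The example above can be sketched as a per-hour billing calculation. This is a simplification that ignores instance-type matching and RI scope, purely to show the arithmetic:

```python
# Sketch of shared Reserved Instance (RI) discounts under consolidated
# billing: RI hours cover matching usage anywhere in the organization,
# and the remainder is billed On-Demand. Numbers mirror the Fiona/John example.
def bill_hour(reserved: int, usage_by_account: dict) -> tuple:
    """Return (instances billed at the RI rate, instances billed On-Demand)."""
    total = sum(usage_by_account.values())   # 3 + 6 = 9 instances this hour
    covered = min(reserved, total)           # 5 RIs cover 5 of them
    return covered, total - covered          # the remaining 4 are On-Demand

ri_hours, on_demand_hours = bill_hour(reserved=5,
                                      usage_by_account={"Fiona": 3, "John": 6})
print(ri_hours, on_demand_hours)  # 5 4
```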

43
Q

You have AWS Basic support, and you have discovered that some AWS resources are being used maliciously, and those resources could potentially compromise your data. What should you do?

1) Contact the AWS Professional Services team
2) Contact the AWS Customer Service team
3) Contact the AWS Abuse team
4) Contact the AWS Concierge team

A

Contact the AWS Abuse team

The AWS Abuse team can assist you when AWS resources are being used to engage in the following types of abusive behavior:

I. Spam: You are receiving unwanted emails from an AWS-owned IP address, or AWS resources are being used to spam websites or forums.

II. Port scanning: Your logs show that one or more AWS-owned IP addresses are sending packets to multiple ports on your server, and you believe this is an attempt to discover unsecured ports.

III. Denial of service attacks (DOS): Your logs show that one or more AWS-owned IP addresses are being used to flood ports on your resources with packets, and you believe this is an attempt to overwhelm or crash your server or software running on your server.

IV. Intrusion attempts: Your logs show that one or more AWS-owned IP addresses are being used to attempt to log in to your resources.

V. Hosting objectionable or copyrighted content: You have evidence that AWS resources are being used to host or distribute illegal content or distribute copyrighted content without the consent of the copyright holder.

VI. Distributing malware: You have evidence that AWS resources are being used to distribute software that was knowingly created to compromise or cause harm to computers or machines on which it is installed.

Note: Anyone can report abuse of AWS resources, not just AWS customers.

44
Q

You work as an on-premises MySQL DBA. The work of database configuration, backups, patching, and DR can be time-consuming and repetitive. Your company has decided to migrate to the AWS Cloud. Which of the following can help save time on database maintenance so you can focus on data architecture and performance?

1) Amazon RDS
2) Amazon CloudWatch
3) Amazon Redshift
4) Amazon DynamoDB

A

Amazon RDS

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity while automating time-consuming administration tasks such as hardware provisioning, operating system maintenance, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need.

    Amazon RDS can be used to host Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, IBM Db2, and Microsoft SQL Server databases.

45
Q

Which of the following is NOT correct regarding Amazon EC2 On-demand instances?

1) The on-demand instances follow the AWS pay-as-you-go pricing model
2) When using on-demand Linux instances, you are charged per second based on an hourly rate
3) With on-demand instances, no longer-term commitments or upfront payments are needed
4) You have to pay a start-up fee when launching a new instance for the first time

A

You have to pay a start-up fee when launching a new instance for the first time

There are no startup or termination fees associated with Amazon EC2.
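The per-second model mentioned in option 2 can be sketched as follows. Linux On-Demand usage is billed per second against an hourly rate, with a minimum charge of 60 seconds per run; the $0.10/hour rate below is a made-up example:

```python
# Sketch of On-Demand Linux billing: charged per second against an hourly
# rate, with a 60-second minimum per run. The hourly rate is hypothetical.
HOURLY_RATE = 0.10  # $/hour, made-up example

def on_demand_cost(seconds: int) -> float:
    billable = max(seconds, 60)          # 60-second minimum charge
    return HOURLY_RATE * billable / 3600

print(round(on_demand_cost(45), 6))      # a 45-second run is billed as 60 seconds
print(round(on_demand_cost(150), 6))     # a 150-second run is billed per second
```

Note there is no launch fee in this model: cost accrues only while the instance runs.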

46
Q

One of the most important AWS best-practices to follow is the cloud architecture principle of elasticity. How does this principle improve your architecture’s design?

1) By automatically scaling your AWS resources using an Elastic Load Balancer
2) By automatically scaling your on-premises resources based on changes in demand
3) By automatically provisioning the required AWS resources based on changes in demand
4) By reducing interdependencies between application components wherever possible

A

By automatically provisioning the required AWS resources based on changes in demand

Before cloud computing, you had to overprovision infrastructure to ensure you had enough capacity to handle your business operations at the peak level of activity. Now, you can provision the amount of resources that you actually need, knowing you can instantly scale up or down with the needs of your business. This reduces costs and improves your ability to meet your users’ demands.

          The concept of Elasticity involves the ability of a service to scale its resources out or in (up or down) based on changes in demand. For example, Amazon EC2 Auto Scaling can help automate the process of adding or removing Amazon EC2 instances as demand increases or decreases.

47
Q

A global company with a large number of AWS accounts is seeking a way in which they can centrally manage billing and security policies across all accounts. Which AWS Service will assist them in meeting these goals?

1) AWS Config
2) AWS Trusted Advisor
3) AWS Organizations
4) IAM User Groups

A

AWS Organizations

AWS Organizations helps customers centrally govern their environments as they grow and scale their workloads on AWS. Whether customers are a growing startup or a large enterprise, Organizations helps them to centrally manage billing; control access, compliance, and security; and share resources across their AWS accounts.

AWS Organizations has the following benefits:

1) Centrally manage access policies across multiple AWS accounts.

2) Automate AWS account creation and management.

3) Control access to AWS services.

4) Consolidate billing across multiple AWS accounts.

5) Configure AWS services across multiple accounts.

6) Share common resources across accounts.

48
Q

Which of the following are not benefits of Amazon S3? (Choose TWO)

1) Amazon S3 can run any type of application or backend system
2) Amazon S3 can be scaled manually to store and retrieve any amount of data from anywhere
3) Amazon S3 provides 99.999999999% (11 9’s) of data durability
4) Amazon S3 stores any number of objects, but with object size limits
5) Amazon S3 provides unlimited storage for any type of data

A

1) Amazon S3 can run any type of application or backend system
2) Amazon S3 can be scaled manually to store and retrieve any amount of data from anywhere

“Amazon S3 can run any type of application or backend system” is not a benefit of S3 and thus is a correct answer. Amazon S3 is a storage service not a compute service.

“Amazon S3 can be scaled manually to store and retrieve any amount of data from anywhere” is not a benefit of S3 and thus is a correct answer. Amazon S3 scales automatically to store and retrieve any amount of data from anywhere.

     Companies today need the ability to simply and securely collect, store, and analyze their data at a massive scale. Amazon S3 is object storage built to store and retrieve any amount of data from anywhere – web sites and mobile apps, corporate applications, and data from IoT sensors or devices.  It’s a simple storage service that offers highly available, and infinitely scalable data storage infrastructure at very low costs. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry. S3 provides comprehensive security and compliance capabilities that meet even the most stringent regulatory requirements. It gives customers flexibility in the way they manage data for cost optimization, access control, and compliance. S3 provides query-in-place functionality, allowing you to run powerful analytics directly on your data at rest in S3. And Amazon S3 is the most supported cloud storage service available, with integration from the largest community of third-party solutions, systems integrator partners, and other AWS services.

   Amazon S3 stores any number of objects, but each object does have a size limitation. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes.

49
Q

A company has an AWS Enterprise Support plan. They want quick and efficient guidance with their billing and account inquiries. Which of the following should the company use?

1) AWS Health
2) AWS Support Concierge
3) AWS Customer Service
4) AWS Operations Support

A

AWS Support Concierge

Included as part of the Enterprise Support plan, the Support Concierge Team are AWS billing and account experts that specialize in working with enterprise accounts. The Concierge team will quickly and efficiently assist you with your billing and account inquiries, and work with you to help implement billing and account best practices so that you can focus on running your business.

Support Concierge service includes:

** 24x7 access to AWS billing and account inquiries.

** Guidance and best practices for billing allocation, reporting, consolidation of accounts, and root-level account security.

** Access to Enterprise account specialists for payment inquiries, training on specific cost reporting, assistance with service limits, and facilitating bulk purchases.

50
Q

Your company has a data store application that requires access to a NoSQL database. Which AWS database offering would meet this requirement?

1) Amazon Aurora
2) Amazon Redshift
3) Amazon Elastic Block Store
4) Amazon DynamoDB

A

Amazon DynamoDB

Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity, makes it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.

51
Q

Which S3 storage class is best for data with unpredictable access patterns?

1) Amazon S3 Intelligent-Tiering
2) Amazon S3 Standard
3) Amazon S3 Standard-Infrequent Access
4) Amazon S3 Glacier Flexible Retrieval

A

Amazon S3 Intelligent-Tiering

The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. For a small monthly monitoring and automation fee per object, Amazon S3 monitors access patterns of the objects in S3 Intelligent-Tiering, and moves the ones that have not been accessed for 30 consecutive days to the infrequent access tier. If an object in the infrequent access tier is accessed, it is automatically moved back to the frequent access tier. There are no retrieval fees when using the S3 Intelligent-Tiering storage class, and no additional tiering fees when objects are moved between access tiers. It is the ideal storage class for long-lived data with access patterns that are unknown or unpredictable.
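If you want objects to land in this storage class automatically, a lifecycle rule can transition them on day 0. The fragment below follows the shape of an S3 lifecycle configuration; the prefix is a made-up example:

```python
# Illustrative S3 lifecycle configuration that moves new objects under a
# (hypothetical) prefix into the S3 Intelligent-Tiering storage class.
lifecycle = {
    "Rules": [{
        "ID": "to-intelligent-tiering",
        "Status": "Enabled",
        "Filter": {"Prefix": "uploads/"},   # hypothetical key prefix
        "Transitions": [
            {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"},
        ],
    }]
}
print(lifecycle["Rules"][0]["Transitions"][0]["StorageClass"])
```

From then on, S3 itself monitors each object's access pattern and moves it between the frequent and infrequent tiers, with no retrieval fees.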

52
Q

Which of the following services allows customers to manage their agreements with AWS?

1) AWS Systems Manager
2) AWS Certificate Manager
3) AWS Organizations
4) AWS Artifact

A

AWS Artifact

AWS Artifact is a self-service audit artifact retrieval portal that provides customers with on-demand access to AWS’ compliance documentation and AWS agreements. You can use AWS Artifact Agreements to review, accept, and track the status of AWS agreements such as the Business Associate Addendum (BAA).

Additional information:

      You can also use AWS Artifact Reports to download AWS security and compliance documents, such as AWS ISO certifications, Payment Card Industry (PCI), and System and Organization Control (SOC) reports.

53
Q

A Japanese company hosts their applications on Amazon EC2 instances in the Tokyo Region. The company has opened new branches in the United States, and the US users are complaining of high latency. What can the company do to reduce latency for the users in the US while minimizing costs?

1) Applying the Amazon Connect latency-based routing policy
2) Deploying new Amazon EC2 instances in a Region located in the US
3) Registering a new US domain name to serve the users in the US
4) Building a new data center in the US and implementing a hybrid model

A

Deploying new Amazon EC2 instances in a Region located in the US

To reduce latency for the US users, the company can provision new Amazon EC2 instances in a Region in or close to the US, or use Amazon CloudFront to cache copies of the content in edge locations near the US users. In either case, user requests travel a shorter distance over the network, and performance improves.

54
Q

What should you do in order to keep the data on EBS volumes safe? (Choose TWO)

1) Store a backup daily in an external drive
2) Create EBS snapshots
3) Regularly update firmware on EBS devices
4) Prevent any unauthorized access to AWS data centers
5) Ensure that EBS data is encrypted at rest

A

2) Create EBS snapshots
5) Ensure that EBS data is encrypted at rest

Creating snapshots of EBS Volumes can help ensure that you have a backup of your EBS volumes just in case any issues arise. You can use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation, retention, and deletion of EBS snapshots.

Automating snapshot management with Amazon DLM helps you to:

  • Protect valuable data by enforcing a regular backup schedule.
  • Retain backups as required by auditors or internal compliance.
  • Reduce storage costs by deleting outdated backups.
  • Create disaster recovery backup policies that back up data to isolated accounts.
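The Amazon DLM schedule described above might look like the following policy body. The tag key/value, schedule name, and snapshot time are made-up examples:

```python
# Illustrative Amazon DLM lifecycle policy body: snapshot EBS volumes
# carrying a (hypothetical) Backup=true tag every 24 hours, starting at
# 03:00 UTC, and keep only the 7 most recent snapshots.
dlm_policy = {
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "Backup", "Value": "true"}],   # hypothetical tag
    "Schedules": [{
        "Name": "daily-snapshots",
        "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS",
                       "Times": ["03:00"]},
        "RetainRule": {"Count": 7},   # delete snapshots beyond the last 7
    }],
}
print(dlm_policy["Schedules"][0]["RetainRule"]["Count"])
```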
     Amazon EBS encryption offers a straightforward encryption solution for your EBS resources that doesn't require you to build, maintain, and secure your own key management infrastructure. Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage.

55
Q

Which of the following is a best practice when designing solutions on AWS?

1) Invest heavily in architecting your environment, as it is not easy to change your design later
2) Provision a large compute capacity to handle any spikes in load
3) Use AWS reservations to reduce costs when testing your production environment
4) Automate wherever possible to make architectural experimentation easier

A

Automate wherever possible to make architectural experimentation easier

The Well-Architected Framework identifies a set of general design principles to facilitate good design in the cloud:

1- Stop guessing your capacity needs: Eliminate guessing about your infrastructure capacity needs. When you make a capacity decision before you deploy a system, you might end up sitting on expensive idle resources or dealing with the performance implications of limited capacity. With cloud computing, these problems can go away. You can use as much or as little capacity as you need, and scale up and down automatically.

2- Test systems at production scale: In the cloud, you can create a production-scale test environment on demand, complete your testing, and then decommission the resources. Because you only pay for the test environment when it’s running, you can simulate your live environment for a fraction of the cost of testing on premises.

3- Automate to make architectural experimentation easier: Automation allows you to create and replicate your systems at low cost and avoid the expense of manual effort. You can track changes to your automation, audit the impact, and revert to previous parameters when necessary.

4- Allow for evolutionary architectures: Allow for evolutionary architectures. In a traditional environment, architectural decisions are often implemented as static, one-time events, with a few major versions of a system during its lifetime. As a business and its context continue to change, these initial decisions might hinder the system’s ability to deliver changing business requirements. In the cloud, the capability to automate and test on demand lowers the risk of impact from design changes. This allows systems to evolve over time so that businesses can take advantage of innovations as a standard practice.

5- Drive architectures using data: In the cloud you can collect data on how your architectural choices affect the behavior of your workload. This lets you make fact-based decisions on how to improve your workload. Your cloud infrastructure is code, so you can use that data to inform your architecture choices and improvements over time.

6- Improve through game days: Test how your architecture and processes perform by regularly scheduling game days to simulate events in production. This will help you understand where improvements can be made and can help develop organizational experience in dealing with events.

56
Q

The principle “design for failure and nothing will fail” is very important when designing your AWS Cloud architecture. Which of the following would help adhere to this principle? (Choose TWO)

1) Elastic Load Balancing
2) Multi-factor authentication
3) Penetration testing
4) Vertical Scaling
5) Availability Zones

A

1) Elastic Load Balancing
5) Availability Zones

Each AWS Region is a separate geographic area. Each AWS Region has multiple, isolated locations known as Availability Zones. When designing your AWS Cloud architecture, you should make sure that your system will continue to run even if failures happen. You can achieve this by deploying your AWS resources in multiple Availability Zones. Availability Zones are isolated from each other; therefore, if one Availability Zone goes down, the others will still be up and running, and your application will be more fault-tolerant. In addition to Availability Zones, you can build a disaster recovery solution by deploying your AWS resources in other Regions. If an entire Region goes down, you will still have resources in another Region that can continue to provide a solution. Finally, you can use the Elastic Load Balancing service to regularly perform health checks and distribute traffic only to healthy instances.

57
Q

A company is introducing a new product to their customers, and is expecting a surge in traffic to their web application. As part of their Enterprise Support plan, which of the following provides the company with architectural and scaling guidance?

1) AWS Support Concierge Service
2) AWS Health
3) AWS Knowledge Center
4) AWS Countdown (formerly AWS Infrastructure Event Management (IEM))

A

AWS Countdown (formerly AWS Infrastructure Event Management (IEM))

AWS Countdown (formerly known as AWS Infrastructure Event Management (IEM)) partners with your technical and project resources to gain a deep understanding of your use case and provide architectural and scaling guidance for an event. Common use-case examples for AWS Countdown include business-critical events, new product launches, migrations, and advertising launches.

    AWS Countdown replaces AWS Support’s Infrastructure Event Management (IEM) service, and is included with Enterprise Support. Enterprise On-Ramp customers receive one (1) Countdown engagement each year at no cost.

58
Q

A company has moved to AWS recently. Which of the following AWS Services will help ensure that they have the proper security settings? (Choose TWO)

1) AWS Trusted Advisor
2) Concierge Support Team
3) Amazon Inspector
4) Amazon SNS
5) Amazon CloudWatch

A

1) AWS Trusted Advisor
3) Amazon Inspector

59
Q

Your company is developing a critical web application in AWS, and the security of the application is a top priority. Which of the following AWS services will provide infrastructure security optimization recommendations?

1) AWS Trusted Advisor
2) AWS Shield
3) AWS Management Console
4) AWS Secrets Manager

A

AWS Trusted Advisor

AWS Trusted Advisor is an online tool that provides real-time guidance to help you provision your resources following AWS best practices. AWS Trusted Advisor offers a rich set of best practice checks and recommendations across six categories: cost optimization, security, fault tolerance (resilience), performance, operational excellence, and service limits (also referred to as service quotas).

       AWS Trusted Advisor improves the security of your application by closing gaps, enabling various AWS security features, and examining your permissions.

The core security checks include: (Important)

1- Security Groups - Specific Ports Unrestricted.

Checks security groups for rules that allow unrestricted access to specific ports. Unrestricted access increases opportunities for malicious activity (hacking, denial-of-service attacks, loss of data).

2- Amazon S3 Bucket Permissions.

Checks buckets in Amazon Simple Storage Service (Amazon S3) that have open access permissions. Bucket permissions that grant List access to everyone can result in higher than expected charges if objects in the bucket are listed by unintended users at a high frequency. Bucket permissions that grant Upload/Delete access to everyone create potential security vulnerabilities by allowing anyone to add, modify, or remove items in a bucket. This check examines explicit bucket permissions and associated bucket policies that might override the bucket permissions.

3- MFA on Root Account.

Checks the root account and warns if multi-factor authentication (MFA) is not enabled. For increased security, AWS recommends that you protect your account by using MFA, which requires a user to enter a unique authentication code from their MFA hardware or virtual device when interacting with the AWS console and associated websites.
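The "Specific Ports Unrestricted" check above boils down to scanning security group rules for world-open access on risky ports. This is an illustrative sketch of that idea in plain Python, not the actual Trusted Advisor API; the port list and rule format are assumptions for the example.

```python
# Hypothetical sketch of a "specific ports unrestricted" style check:
# flag security group rules that allow any IPv4 source (0.0.0.0/0)
# on ports that should normally be restricted.

SENSITIVE_PORTS = {22, 3389, 3306, 5432}  # SSH, RDP, MySQL, PostgreSQL

def unrestricted_rules(rules):
    """Return the rules that open a sensitive port to the whole internet."""
    return [
        rule for rule in rules
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] in SENSITIVE_PORTS
    ]

rules = [
    {"port": 22, "cidr": "0.0.0.0/0"},      # SSH open to the world -> flagged
    {"port": 443, "cidr": "0.0.0.0/0"},     # HTTPS is typically public -> OK
    {"port": 3306, "cidr": "10.0.0.0/16"},  # MySQL limited to the VPC -> OK
]
print(unrestricted_rules(rules))  # -> [{'port': 22, 'cidr': '0.0.0.0/0'}]
```

The real check also inspects the protocol and port ranges of each rule; the simplified dictionary shape here is just enough to show the logic.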

60
Q

What does the “Principle of Least Privilege” refer to?

1) IAM users should not be granted any permissions; to keep your account safe
2) All trusted IAM users should have access to any AWS service in the respective AWS account
3) You should grant your users only the permissions they need when they need them and nothing more
4) All IAM users should have at least the necessary permissions to access the core AWS services

A

You should grant your users only the permissions they need when they need them and nothing more

The principle of least privilege is one of the most important security practices, and it means granting users only the permissions required to perform the tasks entrusted to them and nothing more. The security administrator determines what tasks users need to perform and then attaches the policies that allow them to perform only those tasks. You should start with a minimum set of permissions and grant additional permissions when necessary. Doing so is more secure than starting with permissions that are too lenient and then trying to tighten them down.
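To make the principle concrete, here is a minimal sketch of how an allow-list policy grants only the actions a task needs. This is a deliberately simplified stand-in for IAM's real evaluation logic, which also handles Deny statements, resources, wildcards, and conditions; the policy shape and `is_allowed` helper are hypothetical.

```python
# Simplified least-privilege sketch: an action is permitted only if an
# attached policy statement explicitly allows it. (Real IAM evaluation
# is far richer: explicit denies, resource ARNs, conditions, etc.)

def is_allowed(policy, action):
    return any(
        stmt["Effect"] == "Allow" and action in stmt["Action"]
        for stmt in policy["Statement"]
    )

# Policy granting only the two S3 read actions this user's task requires.
read_only_policy = {
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:GetObject", "s3:ListBucket"]}
    ]
}

print(is_allowed(read_only_policy, "s3:GetObject"))     # -> True
print(is_allowed(read_only_policy, "s3:DeleteObject"))  # -> False (not granted)
```

Starting from this minimal allow list and adding actions as tasks demand them mirrors the "start small, then grant" approach described above.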

61
Q

A company has decided to migrate its Oracle database to AWS. Which AWS service can help achieve this without negatively impacting the functionality of the source database?

1) AWS CloudShell
2) AWS Database Migration Service
3) AWS Application Discovery Service
4) AWS Application Migration Service

A

AWS Database Migration Service

AWS Database Migration Service (DMS) helps you migrate databases to AWS easily and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases. The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL. It also allows you to stream data to Amazon Redshift from any of the supported sources including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, SAP ASE, and SQL Server, enabling consolidation and easy analysis of data in the petabyte-scale data warehouse. AWS Database Migration Service can also be used for continuous data replication with high availability.

62
Q

Which of the following are examples of AWS-Managed Services, where AWS is responsible for the operational and maintenance burdens of running the service? (Choose TWO)

1) AWS IAM
2) Amazon VPC
3) Amazon DynamoDB
4) Amazon Elastic Compute Cloud
5) Amazon Elastic MapReduce

A

3) Amazon DynamoDB
5) Amazon Elastic MapReduce

For managed services such as Amazon Elastic MapReduce (Amazon EMR) and DynamoDB, AWS is responsible for performing all the operations needed to keep the service running.

       Amazon EMR launches clusters in minutes. You don’t need to worry about node provisioning, infrastructure setup, Hadoop configuration, or cluster tuning. Amazon EMR takes care of these tasks so you can focus on analysis.

      DynamoDB is serverless with no servers to provision, patch, or manage and no software to install, maintain, or operate. DynamoDB automatically scales tables up and down to adjust for capacity and maintain performance. Availability and fault tolerance are built in, eliminating the need to architect your applications for these capabilities.

      Other managed services include: AWS Lambda, Amazon RDS, Amazon Redshift, Amazon CloudFront, Amazon S3 and several other services.

       For these managed services, AWS is responsible for most of the configuration and management tasks, but customers are still responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

NOTE:

        The AWS managed services mentioned above are different from the AWS Managed Services (AMS) service. AMS is an AWS service that operates AWS on behalf of enterprise customers and partners. Enterprises want to adopt AWS at scale, but the skills that served them well in traditional IT do not always translate to success in the cloud. AWS Managed Services (AMS) enables them to migrate to AWS at scale more quickly, reduce their operating costs, improve security and compliance, and focus on innovation.
63
Q

Which service is used to ensure that messages between software components are not lost if one or more components fail?

1) AWS Direct Connect
2) Amazon SQS
3) Amazon Connect
4) Amazon SES

A

Amazon SQS

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. SQS lets you decouple application components so that they run independently, increasing the overall fault tolerance of the system. Multiple copies of every message are stored redundantly across multiple Availability Zones so that they are available whenever needed.
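The decoupling pattern SQS provides can be sketched with Python's standard-library queue. This is only an offline illustration of the pattern, not SQS itself; in practice the components would call an SDK such as boto3 (`send_message` / `receive_message`) against a real queue.

```python
# Decoupling sketch: producer and consumer never talk to each other
# directly. Messages sit safely in the queue, so a consumer outage
# does not lose them -- the consumer drains the backlog when it recovers.

from queue import Queue

order_queue = Queue()

# Producer component: enqueues work and moves on immediately.
for order_id in ["order-1", "order-2", "order-3"]:
    order_queue.put(order_id)

# Consumer component: processes whatever is waiting, whenever it runs.
processed = []
while not order_queue.empty():
    processed.append(order_queue.get())

print(processed)  # -> ['order-1', 'order-2', 'order-3']
```

Unlike this in-memory sketch, SQS persists each message redundantly across Availability Zones, which is what makes the pattern survive component failures.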

64
Q

You have deployed your application on multiple Amazon EC2 instances. Your customers complain that sometimes they can’t reach your application. Which AWS service allows you to monitor the performance of your EC2 instances to assist in troubleshooting these issues?

1) AWS CloudTrail
2) AWS Config
3) Amazon CloudWatch
4) AWS Lambda

A

Amazon CloudWatch

Amazon CloudWatch is a service that monitors AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use CloudWatch to detect anomalous behavior in your environments, take automated actions, troubleshoot issues, and discover insights to keep your applications running smoothly.
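The alarm behavior described above can be sketched as a simple threshold evaluation: a CloudWatch alarm enters the ALARM state when a metric breaches a threshold for a configured number of consecutive evaluation periods. The function, metric values, and parameter names below are illustrative assumptions, not the CloudWatch API.

```python
# Illustrative sketch of CloudWatch-style alarm evaluation: report ALARM
# only when the most recent `evaluation_periods` datapoints all breach
# the threshold (hypothetical CPUUtilization samples, in percent).

def alarm_state(datapoints, threshold, evaluation_periods):
    recent = datapoints[-evaluation_periods:]
    breaching = all(value > threshold for value in recent)
    return "ALARM" if breaching else "OK"

cpu_samples = [35.0, 42.5, 91.0, 95.5, 97.2]  # oldest first

print(alarm_state(cpu_samples, threshold=90.0, evaluation_periods=3))  # -> ALARM
print(alarm_state(cpu_samples, threshold=90.0, evaluation_periods=5))  # -> OK
```

In the troubleshooting scenario from the question, an alarm like this on EC2 CPU utilization (or on a custom application metric) would surface the periods when customers could not reach the application.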