AWSExam_2 Flashcards

1
Q

Your IT Director instructed you to ensure that all of the AWS resources in your VPC stay within their respective service limits. You should prepare a system that provides real-time guidance so that your resource provisioning adheres to AWS best practices.

Which of the following is the MOST appropriate service to use to satisfy this task?

A

AWS Trusted Advisor

AWS Trusted Advisor is an online tool that provides real-time guidance to help you provision your resources following AWS best practices. It inspects your AWS environment and makes recommendations for saving money, improving system performance and reliability, or closing security gaps.

2
Q

What is Amazon Inspector?

A

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.

3
Q

You are a Solutions Architect working for a startup which is currently migrating their production environment to AWS. Your manager asked you to set up access to the AWS console using AWS Identity and Access Management (IAM). You have created 5 users for your system administrators using the AWS CLI.

What further steps do you need to take to enable your system administrators to get access to the AWS console?

A

Provide a password for each user created and give these passwords to your system administrators.

The AWS Management Console is the web interface used to manage your AWS resources using your web browser. To access it, your users need a password that they can use to log in to the web console.
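
A minimal boto3 sketch of this step (the user names and temporary password below are placeholders):

import boto3

iam = boto3.client("iam")

# Give each CLI-created user a console password (a login profile).
# PasswordResetRequired forces a password change at first sign-in.
for user in ["sysadmin1", "sysadmin2", "sysadmin3", "sysadmin4", "sysadmin5"]:
    iam.create_login_profile(
        UserName=user,
        Password="Temp0rary-P@ssw0rd!",  # placeholder; distribute securely
        PasswordResetRequired=True,
    )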

4
Q

You have EC2 instances running in your VPC. You have both UAT and production EC2 instances running. You want to ensure that employees who are responsible for the UAT instances don’t have access to work on the production instances, to minimize security risks. Which of the following would be the best way to achieve this?

A

Define the tags on the UAT and production servers and add a condition to the IAM policy which allows access to specific tags.
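One way to express this as a policy (a sketch, assuming an Environment tag and a hypothetical policy name):

import json
import boto3

iam = boto3.client("iam")

# Allow actions only on instances tagged Environment=UAT; production
# instances (tagged differently) are therefore out of reach.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"StringEquals": {"ec2:ResourceTag/Environment": "UAT"}},
    }],
}
iam.create_policy(PolicyName="uat-instances-only", PolicyDocument=json.dumps(policy))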

5
Q

A leading e-commerce company is in need of a storage solution that can be accessed by 1000 Linux servers in multiple availability zones. The service should be able to handle the rapidly changing data at scale while still maintaining high performance. It should also be highly durable and highly available whenever the servers will pull data from it, with little need for management. As the Solutions Architect, which of the following services is the most cost-effective choice that you should use to meet the above requirement?

A

EFS

In this scenario, the keywords are rapidly changing data and 1,000 Linux servers.

Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances. EFS provides the same level of high availability and high scalability as S3; however, this service is more suitable for scenarios where a POSIX-compatible file system is required or where you are storing rapidly changing data.

6
Q

You are assigned to design a highly available architecture in AWS. You have two target groups with three EC2 instances each, which are added to an Application Load Balancer. In the security group of the EC2 instances, you have verified that port 80 for HTTP is allowed. However, the instances are still shown as out of service by the load balancer. What could be the root cause of this issue?

  • The wrong instance type was used for the EC2 instance
  • The instances are using the wrong AMI
  • The health check configuration is not properly defined
  • The wrong subnet was used in your VPC
A

The health check configuration is not properly defined

7
Q

You are working as an IT Consultant for a large media company where you are tasked to design a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. You expect this S3 bucket to immediately receive over 2000 PUT requests and 3500 GET requests per second at peak hour. What should you do to ensure optimal performance?

A

Do nothing. Amazon S3 will automatically manage performance at this scale.

Amazon S3 now provides increased performance to support at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which can save significant processing time for no additional charge. Each S3 prefix can support these request rates, making it simple to increase performance significantly.

8
Q

A company has both an on-premises data center and an AWS cloud infrastructure. They store their graphics, audio, videos, and other multimedia assets primarily in their on-premises storage server and use an S3 Standard storage class bucket as a backup. Their data is heavily used for only a week (7 days), but after that period, it will be infrequently used by their customers. You are instructed to save storage costs in AWS yet maintain the ability to fetch their media assets in a matter of minutes for a surprise annual data audit, which will be conducted both on-premises and on their cloud storage. Which of the following options should you implement to meet the above requirement? (Choose 2)

  • set a lifecycle policy in the bucket to transition to S3 - IA after 30 days
  • set a lifecycle policy in the bucket to transition the data to S3 - OneZone IA after one week (7 days)
  • set a lifecycle policy in the bucket to transition to S3 Glacier Deep Archive after one week (7 days)
  • set a lifecycle policy to transition to S3 - IA after one week (7 days)
  • set a lifecycle policy to transition to Glacier after one week (7 days)
A
  • set a lifecycle policy in the bucket to transition to S3 - IA after 30 days
    • ⇒ Objects must be stored at least 30 days in S3 standard before you can transition them to S3 IA or S3 OneZone IA
  • set a lifecycle policy to transition to Glacier after one week (7 days)
    • can retrieve data within minutes (see the lifecycle sketch below)
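
A minimal boto3 sketch of the Glacier-after-7-days rule (the bucket name is a placeholder; the other accepted answer would instead use StorageClass="STANDARD_IA" with Days=30, the minimum age for an IA transition):

import boto3

s3 = boto3.client("s3")

# Transition every object to Glacier 7 days after creation; Glacier
# expedited retrievals make the data available within minutes.
s3.put_bucket_lifecycle_configuration(
    Bucket="media-backup-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-one-week",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
        }]
    },
)
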
9
Q

You are setting up a cost-effective architecture for a log processing application which has frequently accessed, throughput-intensive workloads with large, sequential I/O operations. The application should be hosted in an already existing On-Demand EC2 instance in your VPC. You have to attach a new EBS volume that will be used by the application. Which of the following is the most suitable EBS volume type that you should use in this scenario?

A

EBS Throughput Optimized HDD (st1)

Throughput Optimized HDD (st1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. This volume type is a good fit for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing. Bootable st1 volumes are not supported.

Throughput Optimized HDD (st1) volumes, though similar to Cold HDD (sc1) volumes, are designed to support frequently accessed data. (Cold HDD for less frequently accessed workloads)

10
Q

You have an existing On-Demand EC2 instance and you are planning to create a new EBS volume that will be attached to this instance. The data that will be stored are confidential medical records, so you have to make sure that the data is protected. How can you secure the data at rest on the new EBS volume that you will create?

A

Create an encrypted EBS volume by ticking the encryption checkbox and attach it to the instance
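
A sketch of the same step with boto3 instead of the console (the AZ, size, instance ID, and device name are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Encrypted=True has the same effect as the console's encryption checkbox.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,          # GiB
    VolumeType="gp2",
    Encrypted=True,
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)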

11
Q

You created a new CloudFormation template that creates 4 EC2 instances that are connected to one Elastic Load Balancer (ELB). Which section of the template should you configure to get the DNS hostname of the ELB upon the creation of the AWS stack?

A

Outputs

Outputs is an optional section of the CloudFormation template that describes the values that are returned whenever you view your stack’s properties.
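
A sketch of what that section looks like (expressed here as a Python dict; "WebELB" is an assumed logical resource ID and the Resources section is elided):

import json

# The Outputs section below returns the ELB's DNS name with the stack's
# properties once the stack is created.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {},  # ... 4 EC2 instances and the "WebELB" load balancer ...
    "Outputs": {
        "LoadBalancerDNS": {
            "Description": "DNS name of the load balancer",
            "Value": {"Fn::GetAtt": ["WebELB", "DNSName"]},
        }
    },
}
print(json.dumps(template, indent=2))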

12
Q

An On-Demand EC2 instance is launched into a VPC subnet with the Network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance’s security group has an inbound rule to allow SSH from any IP address and does not have any outbound rules. In this scenario, what are the changes needed to allow SSH connection to the instance?

A

The outbound network ACL needs to be modified to allow outbound traffic

In order for you to establish an SSH connection from your home computer to your EC2 instance, you need to do the following:

  • On the Security Group, add an Inbound Rule to allow SSH traffic to your EC2 instance.
  • On the NACL, add both an Inbound and Outbound Rule to allow SSH traffic to your EC2 instance.
13
Q

An investment bank has a distributed batch processing application which is hosted in an Auto Scaling group of Spot EC2 instances with an SQS queue. You configured your components to use client-side buffering so that the calls made from the client will be buffered first and then sent as a batch request to SQS. What is the period of time during which the SQS queue prevents other consuming components from receiving and processing a message?

A

Visibility Timeout

Immediately after the message is received, it remains in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The maximum is 12 hours.
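
A sketch of adjusting it with boto3 (the queue URL and the 5-minute value are placeholders):

import boto3

sqs = boto3.client("sqs")

# Raise the visibility timeout from the 30-second default so a slow
# consumer can finish before the message reappears for others.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders",
    Attributes={"VisibilityTimeout": "300"},  # seconds; maximum is 43200 (12 hours)
)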

14
Q

A web application is deployed in an On-Demand EC2 instance in your VPC. There is an issue with the application which requires you to connect to it via an SSH connection. Which of the following is needed in order to access an EC2 instance from the Internet? (Choose 3)

  • An Internet gateway
  • A Private IP address attached to the instance
  • A Public IP address attached to the instance
  • a Private Elastic IP address attached to the instance
  • A route entry to the internet gateway in the Route table of the VPC
  • a VPN peering connection
A
  • An Internet gateway
  • A Public IP address attached to the instance
  • A route entry to the internet gateway in the Route table of the VPC
15
Q

An e-commerce application is using a fanout messaging pattern for its order management system. For every order, it sends an Amazon SNS message to an SNS topic, and the message is replicated and pushed to multiple Amazon SQS queues for parallel asynchronous processing. A Spot EC2 instance retrieves the message from each SQS queue and processes the message. There was an incident in which an EC2 instance was abruptly terminated while it was processing a message, and the processing was not completed in time. In this scenario, what happens to the SQS message?

A

When the message visibility timeout expires, the message becomes available for processing by other EC2 instances.

Because Amazon SQS is a distributed system, there’s no guarantee that the consumer actually receives the message (for example, due to a connectivity issue, or due to an issue in the consumer application). Thus, the consumer must delete the message from the queue after receiving and processing it.

Immediately after the message is received, it remains in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The maximum is 12 hours.

16
Q

What are Dead Letter Queues?

A

Amazon SQS supports dead-letter queues, which other queues (source queues) can target for messages that can’t be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn’t succeed.
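
A minimal boto3 sketch of wiring one up (queue names and the maxReceiveCount value are placeholders):

import json
import boto3

sqs = boto3.client("sqs")

# Create the dead-letter queue and look up its ARN.
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Point the source queue's RedrivePolicy at the DLQ: after 5 failed
# receives, a message is moved there for inspection.
source_url = sqs.create_queue(QueueName="orders")["QueueUrl"]
sqs.set_queue_attributes(
    QueueUrl=source_url,
    Attributes={"RedrivePolicy": json.dumps(
        {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
    )},
)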

17
Q

You just joined a large tech company with an existing Amazon VPC. When reviewing the Auto Scaling events, you noticed that their web application is scaling up and down multiple times within the hour. What design change could you make to optimize cost while preserving elasticity?

A

Change the cooldown period of the Auto Scaling group and set the CloudWatch metric to a higher threshold.

Since the application is scaling up and down multiple times within the hour, the issue lies in the cooldown period of the Auto Scaling group.

The cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that it doesn’t launch or terminate additional instances before the previous scaling activity takes effect. After the Auto Scaling group dynamically scales using a simple scaling policy, it waits for the cooldown period to complete before resuming scaling activities.

When you manually scale your Auto Scaling group, the default is not to wait for the cooldown period, but you can override the default and honor the cooldown period. If an instance becomes unhealthy, the Auto Scaling group does not wait for the cooldown period to complete before replacing the unhealthy instance.
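
A sketch of lengthening the cooldown with boto3 (the group name and the 10-minute value are placeholders):

import boto3

autoscaling = boto3.client("autoscaling")

# A longer cooldown makes the group wait after each simple-scaling
# activity before launching or terminating more instances.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    DefaultCooldown=600,  # seconds
)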

18
Q

You are working as a Solutions Architect for a fast-growing startup which just started operations during the past 3 months. They currently have an on-premises Active Directory and 10 computers. To save costs in procuring physical workstations, they decided to deploy virtual desktops for their new employees in a virtual private cloud in AWS. The new cloud infrastructure should leverage the existing security controls in AWS but should still be able to communicate with their on-premises network. Which set of AWS services will you use to meet these requirements?

  • AWS Directory Services, VPN connection and AWS IAM
  • AWS Directory Services, VPN Connection and Amazon WorkSpaces
  • AWS Directory Services, VPN Connection and ClassicLink
  • AWS Directory Services, VPN connection and S3
A

AWS Directory Services, VPN Connection and Amazon WorkSpaces

First, you need a VPN connection to connect the VPC and your on-premises network. Second, you need AWS Directory Services to integrate with your on-premises Active Directory, and lastly, you need to use Amazon WorkSpaces to create the needed virtual desktops in your VPC.

19
Q

You are running an EC2 instance store-based instance. You shut it down and then start the instance. You noticed that the data which you have saved earlier is no longer available. What might be the cause of this?

A

the EC2 instance was using instance store volumes, which are ephemeral and only live for the life of the instance

An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.

20
Q

You are working for a top IT Consultancy that has a VPC with two On-Demand EC2 instances with Elastic IP addresses. You were notified that your EC2 instances are currently under SSH brute force attacks over the Internet. Their IT Security team has identified the IP addresses where these attacks originated. You have to immediately implement a temporary fix to stop these attacks while the team is setting up AWS WAF, GuardDuty, and AWS Shield Advanced to permanently fix the security vulnerability. Which of the following provides the quickest way to stop the attacks to your instances?

A

Block the IP addresses in the Network Access Control List

(Removing the Internet Gateway from the VPC is incorrect because doing this will also make your EC2 instance inaccessible to you as it will cut down the connection to the Internet.)
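
A sketch of the deny rule with boto3 (the NACL ID, rule number, and attacking CIDR are placeholders):

import boto3

ec2 = boto3.client("ec2")

# NACL rules are evaluated in ascending rule-number order, so the DENY
# must carry a lower number than the rule that allows SSH.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=90,
    Protocol="6",                 # TCP
    RuleAction="deny",
    Egress=False,                 # inbound rule
    CidrBlock="198.51.100.0/24",  # range identified by the security team
    PortRange={"From": 22, "To": 22},
)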

21
Q

What is a static Anycast IP address for?

A

Assigning a static Anycast IP address to each EC2 instance is primarily used by AWS Global Accelerator to enable organizations to seamlessly route traffic to multiple regions and improve availability and performance for their end-users.

22
Q

You have a web application hosted on a fleet of EC2 instances located in two Availability Zones that are all placed behind an Application Load Balancer. As a Solutions Architect, you have to add a health check configuration to ensure your application is highly-available. Which health checks will you implement?

A

HTTP or HTTPS health check

The type of ELB that is mentioned here is an Application Load Balancer. This is used if you want a flexible feature set for your web applications with HTTP and HTTPS traffic. However, it only supports two types of health checks: HTTP and HTTPS.

23
Q

When are TCP health checks offered?

A

TCP health checks are only offered by the Network Load Balancer. It is used if you need ultra-high performance.

24
Q

You are implementing a hybrid architecture for your company where you are connecting their Amazon Virtual Private Cloud (VPC) to their on-premises network. Which of the following can be used to create a private connection between the VPC and your company’s on-premises network?

A

Direct Connect

Direct Connect creates a direct, private connection from your on-premises data center to AWS, letting you establish a 1-gigabit or 10-gigabit dedicated network connection using Ethernet fiber-optic cable.

25
Q

You are consulted by a multimedia company that needs to deploy web services to an AWS region which they have never used before. The company currently has an IAM role for their Amazon EC2 instance which permits the instance to access Amazon DynamoDB. They want their EC2 instances in the new region to have the exact same privileges. What should you do to accomplish this?

A

Assign the existing IAM role to instances in the new region

In this scenario, the company has an existing IAM role, hence you don’t need to create a new one. IAM is a global service whose roles are available to all regions; hence, all you have to do is assign the existing IAM role to the instances in the new region.

26
Q

A company has 10 TB of infrequently accessed financial data files that would need to be stored in AWS. These data would be accessed infrequently during specific weeks when they are retrieved for auditing purposes. The retrieval time is not strict as long as it does not exceed 24 hours. Which of the following would be a secure, durable, and cost-effective solution for this scenario?

  • upload the data directly to Amazon Glacier through the AWS Management Console
  • upload the data to S3 and set a lifecycle policy to transition to Glacier after 0 days
  • upload the data to S3 and transition to S3 OneZone IA
  • upload the data to S3 and transition to S3 IA
A

upload the data to S3 and set a lifecycle policy to transition to Glacier after 0 days

Glacier has a management console which you can use to create and delete vaults. However, you cannot directly upload archives to Glacier by using the management console. To upload data, such as photos, videos, and other documents, you must either use the AWS CLI or write code to make requests, by using either the REST API directly or by using the AWS SDKs.

27
Q

You are managing a global news website which has a very high traffic. To improve the performance, you redesigned the application architecture to use a Classic Load Balancer with an Auto Scaling Group in multiple Availability Zones. However, you noticed that one of the Availability Zones is not receiving any traffic. What is the root cause of this issue?

  • by default, you are not allowed to use a load balancer with multi-AZ. you have to send a request form to AWS in order for this to work
  • the AZ is not properly added to the load balancer which is why it is not receiving any traffic
  • auto scaling should be disabled for the load balancer to route the traffic to multiple AZs
  • the classic load balancer is down
A

the AZ is not properly added to the load balancer which is why it is not receiving any traffic

In this scenario, one of the Availability Zones is not properly added to the Elastic load balancer. Hence, that Availability Zone is not receiving any traffic.

You can set up your load balancer in EC2-Classic to distribute incoming requests across EC2 instances in a single Availability Zone or multiple Availability Zones. First, launch EC2 instances in all the Availability Zones that you plan to use. Next, register these instances with your load balancer. Finally, add the Availability Zones to your load balancer. After you add an Availability Zone, the load balancer starts routing requests to the registered instances in that Availability Zone. Note that you can modify the Availability Zones for your load balancer at any time.

By default, the load balancer routes requests evenly across its Availability Zones. To route requests evenly across the registered instances in the Availability Zones, enable cross-zone load balancing.

28
Q

You have a web application running on EC2 instances which processes sensitive financial information. All of the data are stored on an Amazon S3 bucket. The financial information is accessed by users over the Internet. The security team of the company is concerned that the Internet connectivity to Amazon S3 is a security risk. In this scenario, what will you do to resolve this security concern?

A

Change the web architecture to access the financial data in your S3 bucket through a Gateway VPC endpoint.

Take note that your VPC lives within a larger AWS network and the services, such as S3, DynamoDB, RDS and many others, are located outside of your VPC, but still within the AWS network. By default, the connection that your VPC uses to connect to your S3 bucket or any other service traverses the public Internet via your Internet Gateway.

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
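
A sketch of creating the endpoint with boto3 (the VPC ID, route table ID, and region are placeholders):

import boto3

ec2 = boto3.client("ec2")

# A Gateway endpoint for S3; the matching route entries are added to the
# given route tables automatically.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)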

29
Q

You are planning to migrate a MySQL database from your on-premises data center to your AWS Cloud. This database will be used by a legacy batch application which has steady-state workloads in the morning but has its peak load at night for the end-of-day processing. You need to choose an EBS volume which can handle a maximum of 450 GB of data and can also be used as the system boot volume for your EC2 instance. Which of the following is the most cost-effective storage type to use in this scenario?

A

Amazon EBS General Purpose SSD (gp2)

The EBS volume that you should use has to handle a maximum of 450 GB of data and can also be used as the system boot volume for your EC2 instance. Since HDD volumes cannot be used as a bootable volume, we can narrow down our options by selecting SSD volumes. In addition, SSD volumes are more suitable for transactional database workloads

General Purpose: These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time. Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 10,000 IOPS (at 3,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size. For example, the 450 GiB volume in this scenario would have a baseline of 450 × 3 = 1,350 IOPS.

30
Q

Your company has a web-based ticketing service that utilizes Amazon SQS and a fleet of EC2 instances. The EC2 instances that consume messages from the SQS queue are configured to poll the queue as often as possible to keep end-to-end throughput as high as possible. You noticed that polling the queue in tight loops is using unnecessary CPU cycles, resulting in increased operational costs due to empty responses. In this scenario, what will you do to make the system more cost-effective?

A

Configure the SQS queue to use long polling by setting ReceiveMessageWaitTimeSeconds to a number greater than zero

The ReceiveMessageWaitTimeSeconds is the queue attribute that determines whether you are using Short or Long polling. By default, its value is zero which means it is using Short polling. If it is set to a value greater than zero, then it is Long polling.
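
A sketch of both ways to enable it (the queue URL is a placeholder):

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/tickets"

# Queue-wide: any value above zero turns on long polling.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)

# Per call: wait up to 20 seconds instead of returning an empty response
# immediately, which eliminates most wasted polling cycles.
messages = sqs.receive_message(
    QueueUrl=queue_url, WaitTimeSeconds=20, MaxNumberOfMessages=10
)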

31
Q

A health organization is using a large Dedicated EC2 instance with multiple EBS volumes to host its health records web application. The EBS volumes must be encrypted due to the confidentiality of the data that they are handling and also to comply with the HIPAA (Health Insurance Portability and Accountability Act) standard. In EBS encryption, what can AWS use to secure the volume’s data at rest? (Choose 2)

  • by using Amazon-managed keys in AWS KMS
  • by using a password stored in CloudHSM
  • by using your own keys in AWS KMS
  • by using the SSL certificates provided by the AWS Certificate Manager
  • by using S3 client-side encryption
  • by using S3 server-side encryption
A

The correct answers are: using your own keys in AWS Key Management Service (KMS) and using Amazon-managed keys in AWS Key Management Service (KMS).

Amazon EBS encryption offers seamless encryption of EBS data volumes, boot volumes, and snapshots, eliminating the need to build and maintain a secure key management infrastructure. EBS encryption enables data at rest security by encrypting your data using Amazon-managed keys, or keys you create and manage using the AWS Key Management Service (KMS).

(using S3 server-side or client-side encryption relates only to S3)

32
Q

A Solutions Architect is migrating several Windows-based applications to AWS that require a scalable file system storage for high-performance computing (HPC). The storage service must have full support for the SMB protocol and Windows NTFS, Active Directory (AD) integration, and Distributed File System (DFS). Which of the following is the MOST suitable storage service that the Architect should use to fulfill this scenario?

A

Amazon FSx for Windows File Server

Amazon FSx provides fully managed third-party file systems. Amazon FSx provides you with the native compatibility of third-party file systems with feature sets for workloads such as Windows-based storage, high-performance computing (HPC), machine learning, and electronic design automation (EDA). You don’t have to worry about managing file servers and storage, as Amazon FSx automates the time-consuming administration tasks such as hardware provisioning, software configuration, patching, and backups. Amazon FSx integrates the file systems with cloud-native AWS services, making them even more useful for a broader set of workloads.

(Amazon FSx for Lustre is incorrect because this service doesn’t support the Windows-based applications as well as Windows servers.)


33
Q

The social media company that you are working for needs to capture the detailed information of all HTTP requests that went through their public-facing application load balancer every five minutes. They want to use this data for analyzing traffic patterns and for troubleshooting their web applications in AWS. Which of the following options meet the customer requirements?

A

Enable access logs on the Application Load Balancer

Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files. You can disable access logging at any time.
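
A sketch of enabling it with boto3 (the load balancer ARN and bucket name are placeholders):

import boto3

elbv2 = boto3.client("elbv2")

# Logs are delivered as compressed files to the specified S3 bucket
# every 5 minutes.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                    "loadbalancer/app/web/0123456789abcdef",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "alb-access-logs-bucket"},
    ],
)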

34
Q

A company is planning to launch a High Performance Computing (HPC) cluster in AWS that does Computational Fluid Dynamics (CFD) simulations. The solution should scale-out their simulation jobs to experiment with more tunable parameters for faster and more accurate results. The cluster is composed of Windows servers hosted on t3a.medium EC2 instances. As the Solutions Architect, you should ensure that the architecture provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. Which is the MOST suitable and cost-effective solution that the Architect should implement to achieve the above requirements?

A

Enable Enhanced Networking with Elastic Network Adapter (ENA) on the Windows EC2 instances.

Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. There is no additional charge for using enhanced networking.

Amazon EC2 provides enhanced networking capabilities through the Elastic Network Adapter (ENA). It supports network speeds of up to 100 Gbps for supported instance types. Elastic Network Adapters (ENAs) provide traditional IP networking features that are required to support VPC networking.

An Elastic Fabric Adapter (EFA) is simply an Elastic Network Adapter (ENA) with added capabilities. It provides all of the functionality of an ENA, with additional OS-bypass functionality. OS-bypass is an access model that allows HPC and machine learning applications to communicate directly with the network interface hardware to provide low-latency, reliable transport functionality.

The OS-bypass capabilities of EFAs are not supported on Windows instances. If you attach an EFA to a Windows instance, the instance functions as an Elastic Network Adapter, without the added EFA capabilities.

Hence, the correct answer is to enable Enhanced Networking with Elastic Network Adapter (ENA) on the Windows EC2 Instances.

35
Q

You are working for a large financial company. In their enterprise application, they want to apply a group of database-specific settings to their Relational Database Instances.

Which of the following options can be used to easily apply the settings in one go for all of the Relational database instances?

A

Parameter Groups

You manage your DB engine configuration through the use of parameters in a DB parameter group. DB parameter groups act as a container for engine configuration values that are applied to one or more DB instances.

36
Q

A Junior DevOps Engineer deployed a large EBS-backed EC2 instance to host a NodeJS web app in AWS which was developed by an IT contractor. He properly configured the security group and used a key pair named “tutorialsdojokey”, which has a tutorialsdojokey.pem private key file. The EC2 instance works as expected and the junior DevOps engineer can connect to it using an SSH connection. The IT contractor was also given the key pair, and he has made various changes to the instance as well as to the files located in the .ssh folder to make the NodeJS app work. After a few weeks, the IT contractor and the junior DevOps engineer cannot connect to the EC2 instance anymore, even with a valid private key file. They are constantly getting a “Server refused our key” error even though their private key is valid.

In this scenario, which one of the following options is not a possible reason for this issue?

  • the SSH private key that you are using has a file permission of 0777
  • you don’t have permissions for the .ssh file
  • you’re using an SSH private key but the corresponding public key is not in the authorized_keys file
  • you don’t have permissions for your authorized_keys file
A

All of the options here are correct except for the option that says: The SSH private key that you are using has a file permission of 0777 because if the private key that you are using has a file permission of 0777, then it will throw an “Unprotected Private Key File” error and not a “Server refused our key” error.

You might be unable to log into an EC2 instance if:

  • You’re using an SSH private key but the corresponding public key is not in the authorized_keys file.
  • You don’t have permissions for your authorized_keys file.
  • You don’t have permissions for the .ssh folder.
  • Your authorized_keys file or .ssh folder isn’t named correctly.
  • Your authorized_keys file or .ssh folder was deleted.
  • Your instance was launched without a key, or it was launched with an incorrect key.
37
Q

You have just launched a new API Gateway service which uses AWS Lambda as a serverless computing service. Over which protocol will your API endpoint be exposed?

A

HTTPS

All of the APIs created with Amazon API Gateway expose HTTPS endpoints only (unencrypted HTTP endpoints are not supported).

38
Q

In a startup company you are working for, you are asked to design a web application that requires a NoSQL database that has no limit on the storage size for a given table. The startup is still new in the market and it has very limited human resources who can take care of the database infrastructure.

Which is the most suitable service that you can implement that provides a fully managed, scalable and highly available NoSQL service?

A

DynamoDB

39
Q

Your manager instructed you to use Route 53 instead of an ELB to load balance the incoming request to your web application. The system is deployed to two EC2 instances to which the traffic needs to be distributed to. You want to set a specific percentage of traffic to go to each instance. Which routing policy would you use?

A

Weighted

Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes including load balancing and testing new versions of software. You can set a specific percentage of how much traffic will be allocated to the resource by specifying the weights.
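
A sketch of two weighted records with boto3 (the zone ID, IPs, and the 75/25 split are placeholders):

import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, ip, weight):
    # Records sharing a name/type but with distinct SetIdentifiers are
    # served in proportion to their weights.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",
    ChangeBatch={"Changes": [
        weighted_record("instance-1", "203.0.113.10", 75),
        weighted_record("instance-2", "203.0.113.11", 25),
    ]},
)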

40
Q

The start-up company that you are working for has a batch job application that is currently hosted on an EC2 instance. It is set to process messages from a queue created in SQS with default settings. You configured the application to process the messages once a week. After 2 weeks, you noticed that not all messages are being processed by the application. What is the root cause of this issue?

A

Amazon SQS has automatically deleted the messages that have been in the queue for more than the maximum message retention period.

Amazon SQS automatically deletes messages that have been in a queue for more than the maximum message retention period. The default message retention period is 4 days. Since the queue is configured to the default settings and the batch job application only processes the messages once a week, the messages that are in the queue for more than 4 days are deleted. This is the root cause of the issue.

To fix this, you can increase the message retention period to a maximum of 14 days using the SetQueueAttributes action.

41
Q

An application is hosted in an On-Demand EC2 instance and is using Amazon SDK to communicate to other AWS services such as S3, DynamoDB, and many others. As part of the upcoming IT audit, you need to ensure that all API calls to your AWS resources are logged and durably stored. Which is the most suitable service that you should use to meet this requirement?

A

AWS CloudTrail

AWS CloudTrail increases visibility into your user and resource activity by recording AWS Management Console actions and API calls. You can identify which users and accounts called AWS, the source IP address from which the calls were made, and when the calls occurred.

42
Q

A client is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The client also uses Amazon Route 53 to manage their public DNS. How should the client configure the DNS zone apex record to point to the load balancer?

  • Create an alias for a CNAME record to the load balancer DNS name
  • create a CNAME record pointing to the load balancer DNS name
  • Create an A record pointing to the IP address of the load balancer
  • Create an A record aliased to the load balancer DNS name
A

Create an A record aliased to the load balancer DNS name

Route 53 supports the alias resource record set, which lets you map your zone apex (e.g. tutorialsdojo.com) DNS name to your load balancer's DNS name. IP addresses associated with Elastic Load Balancing can change at any time due to scaling or software updates. Route 53 responds to each request for an alias resource record set with one IP address for the load balancer.
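
A sketch of the alias record with boto3 (all IDs and names are placeholders; note that AliasTarget's HostedZoneId is the load balancer's hosted zone ID, not your own zone's):

import boto3

route53 = boto3.client("route53")

# An alias A record at the zone apex; alias records take no TTL and
# Route 53 resolves them to the ELB's current IP addresses.
route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",  # your public hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "tutorialsdojo.com",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ELB's hosted zone ID
                "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)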

43
Q

A website is running on an Auto Scaling group of On-Demand EC2 instances which are abruptly getting terminated from time to time. To automate the monitoring process, you started to create a simple script which uses the AWS CLI to find the root cause of this issue. Which of the following is the most suitable command to use?

  • aws ec2 describe-images
  • aws ec2 get-console-screenshot
  • aws ec2 describe-volume-status
  • aws ec2 describe-instances
A

aws ec2 describe-instances

The describe-instances command shows the status of the EC2 instances including the recently terminated instances. It also returns a StateReason of why the instance was terminated.
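
The equivalent lookup in boto3, for a script that inspects the termination reason (a sketch):

import boto3

ec2 = boto3.client("ec2")

# StateReason carries codes such as "Server.SpotInstanceTermination" or
# "Client.UserInitiatedShutdown" explaining why an instance stopped.
response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["terminated"]}]
)
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        reason = instance.get("StateReason", {})
        print(instance["InstanceId"], reason.get("Code"), reason.get("Message"))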

44
Q

A news company is planning to use a Hardware Security Module (CloudHSM) in AWS for secure key storage of their web applications. You have launched the CloudHSM cluster but after just a few hours, a support staff mistakenly attempted to log in as the administrator three times using an invalid password in the Hardware Security Module. This has caused the HSM to be zeroized, which means that the encryption keys on it have been wiped. Unfortunately, you did not have a copy of the keys stored anywhere else.

How can you obtain a new copy of the keys that you have stored on Hardware Security Module?

A

the keys are lost permanently if you did not have a copy

Attempting to log in as the administrator more than twice with the wrong password zeroizes your HSM appliance. When an HSM is zeroized, all keys, certificates, and other data on the HSM are destroyed. You can use your cluster’s security group to prevent an unauthenticated user from zeroizing your HSM.

Amazon does not have access to your keys nor to the credentials of your Hardware Security Module (HSM) and therefore has no way to recover your keys if you lose your credentials. Amazon strongly recommends that you use two or more HSMs in separate Availability Zones in any production CloudHSM Cluster to avoid loss of cryptographic keys.

45
Q

You recently launched a fleet of on-demand EC2 instances to host a massively multiplayer online role-playing game (MMORPG) server in your VPC. The EC2 instances are configured with Auto Scaling and AWS Systems Manager. What can you use to configure your EC2 instances without having to establish a RDP or SSH connection to each instance?

A

Run Command

You can use Run Command from the console to configure instances without having to log in to each instance.
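
A sketch of Run Command via boto3 (the Fleet tag and the commands are assumptions for illustration):

import boto3

ssm = boto3.client("ssm")

# Run a shell script on every instance carrying the tag, with no SSH or
# RDP session required.
ssm.send_command(
    DocumentName="AWS-RunShellScript",
    Targets=[{"Key": "tag:Fleet", "Values": ["mmorpg"]}],
    Parameters={"commands": ["yum update -y"]},
)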

46
Q

You are working for a data analytics startup that collects clickstream data and stores them in an S3 bucket. You need to launch an AWS Lambda function to trigger your ETL jobs to run as soon as new data becomes available in Amazon S3. Which of the following services can you use as an extract, transform, and load (ETL) service in this scenario?

A

AWS Glue

AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. You can create and run an ETL job with a few clicks in the AWS Management Console. You simply point AWS Glue to your data stored on AWS, and AWS Glue discovers your data and stores the associated metadata (e.g. table definition and schema) in the AWS Glue Data Catalog.

47
Q

A financial analytics application that collects, processes and analyzes stock data in real-time is using Kinesis Data Streams. The producers continually push data to Kinesis Data Streams while the consumers process the data in real time. In Amazon Kinesis, where can the consumers store their results? (Choose 2)

A
  • S3
  • Redshift

Consumers (such as a custom application running on Amazon EC2, or an Amazon Kinesis Data Firehose delivery stream) can store their results using an AWS service such as Amazon DynamoDB, Amazon Redshift, or Amazon S3.

48
Q

A leading bank has an application that is hosted on an Auto Scaling group of EBS-backed EC2 instances. As the Solutions Architect, you need to provide the ability to fully restore the data stored in their EBS volumes by using EBS snapshots. Which of the following approaches provide the lowest cost for Amazon Elastic Block Store snapshots?

  • just maintain a single snapshot of the EBS volume since the latest snapshot is both incremental and complete
  • maintain a volume snapshot; subsequent snapshots will overwrite one another
  • maintain two snapshots: the original snapshot and the latest incremental snapshot
  • maintain the most current snapshot and then archive the original and incremental snapshots to Glacier
A

Just maintain a single snapshot of the EBS volume since the latest snapshot is both incremental and complete.

49
Q

You recently launched a news website which is expected to be visited by millions of people around the world. You chose to deploy the website in AWS to take advantage of its extensive range of cloud services and global infrastructure. Aside from AWS Region and Availability Zones, which of the following is part of the AWS Global Infrastructure that is used for content distribution?

A

Edge Locations

50
Q

An application is hosted on an EC2 instance with multiple EBS Volumes attached and uses Amazon Neptune as its database. To improve data security, you encrypted all of the EBS volumes attached to the instance to protect the confidential data stored in the volumes. Which of the following statements are true about encrypted Amazon Elastic Block Store volumes? (Choose 2)

  • all data moving between the volume and the instance are encrypted
  • snapshots are not automatically encrypted
  • snapshots are automatically encrypted
  • only the data in the volume is encrypted and not all the data moving between the volume and the instance
  • the volumes created from the encrypted snapshots are not encrypted
A

When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:

  • Data at rest inside the volume
  • All data moving between the volume and the instance
  • All snapshots created from the volume
  • All volumes created from those snapshots

Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. You can encrypt both the boot and data volumes of an EC2 instance.

51
Q

You are working as a Solutions Architect for a multinational IT consultancy company where you are managing an application hosted in an Auto Scaling group of EC2 instances which stores data in an S3 bucket. You must ensure that the data are encrypted at rest using an encryption key that is both provided and managed by the company. This change should also provide AES-256 encryption to their data to comply with the strict security policy of the company. Which of the following actions should you implement to achieve this? (Choose 2)

  • implement S3 server-side encryption with AWS KMS
  • encrypt the data on the client-side before sending to S3 using their own master key
  • implement S3 server-side encryption with customer-provided keys (SSE-C)
  • use SSL to encrypt the data while in transit to S3
  • implement S3 server-side encryption with Amazon-managed encryption keys
A

encrypt data on the client-side before sending to S3 using their own master key + implement s3 server-side encryption with customer-provided keys (SSE-C)

(Using SSL to encrypt the data while in transit to S3 is incorrect because the requirement is to secure only the data at rest, not the data in transit.)
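
A sketch of an SSE-C upload with boto3 (the bucket, key, and randomly generated key below are placeholders; in practice the company supplies its own managed 256-bit key):

import os
import boto3

s3 = boto3.client("s3")

# With SSE-C, S3 performs AES-256 encryption using the key you send on
# each request but never stores the key itself.
customer_key = os.urandom(32)  # stand-in for the company's own key
s3.put_object(
    Bucket="company-data-bucket",
    Key="records/data.csv",
    Body=b"confidential,data",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,  # boto3 encodes and checksums it for you
)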

52
Q

A company has recently adopted a hybrid cloud architecture and is planning to migrate a database hosted on-premises to AWS. The database currently has over 12 TB of consumer data, handles highly transactional (OLTP) workloads, and is expected to grow exponentially. The Solutions Architect should ensure that the database is ACID-compliant and can handle complex queries of the application. Which type of database service should the Architect use?

A

Amazon Aurora

(Amazon Redshift is incorrect because this is primarily used for OLAP applications and not for OLTP. Moreover, it doesn’t scale automatically to handle the exponential growth of the database.)

(Amazon DynamoDB is incorrect because although you can use this to have an ACID-compliant database, it is not capable of handling complex queries and highly transactional (OLTP) workloads.)

(Amazon RDS is incorrect because although it is an ACID-compliant relational database that can handle complex queries and transactional (OLTP) workloads, it is not scalable to handle the growth of the database. Amazon Aurora is the better choice as its underlying storage can grow automatically as needed.)

53
Q

What is AWS Database Migration Service for?

A

AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.

54
Q

What is Amazon Neptune?

A

Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets.

55
Q

A financial company wants to store their data in Amazon S3 but at the same time, they want to store their frequently accessed data locally on their on-premises server. This is due to the fact that they do not have the option to extend their on-premises storage, which is why they are looking for a durable and scalable storage service to use in AWS. What is the best solution for this scenario?

A

Use Storage Gateway - Cached Volumes

By using Cached volumes, you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally in your on-premises network. Cached volumes offer substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data. This is the best solution for this scenario.

56
Q

A loan processing application is hosted in a single On-Demand EC2 instance in your VPC. To improve the scalability of your application, you have to use Auto Scaling to automatically add new EC2 instances to handle a surge of incoming requests. Which of the following items should be done in order to add an existing EC2 instance to an Auto Scaling group? (Choose 2)

  • the instance is launched into one of the AZs defined in your Auto Scaling Group
  • you must stop the instance first
  • you have to ensure that the AMI used to launch the instance no longer exists
  • you have to ensure that the AMI used to launch the instance still exists
  • you have to ensure that the instance is in a different AZ as the Auto Scaling group
A

The instance that you want to attach must meet the following criteria (a minimal attach call is sketched after this list):

  • The instance is in the running state.
  • The AMI used to launch the instance must still exist.
  • The instance is not a member of another Auto Scaling group.
  • The instance is launched into one of the Availability Zones defined in your Auto Scaling group.
  • If the Auto Scaling group has an attached load balancer, the instance and the load balancer must both be in EC2-Classic or the same VPC. If the Auto Scaling group has an attached target group, the instance and the load balancer must both be in the same VPC.
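
A minimal attach call once those criteria are met (the instance ID and group name are placeholders):

import boto3

autoscaling = boto3.client("autoscaling")

# Attaching a running instance increments the group's desired capacity
# automatically.
autoscaling.attach_instances(
    InstanceIds=["i-0123456789abcdef0"],
    AutoScalingGroupName="loan-app-asg",
)
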
57
Q

A local bank has an in-house application which handles sensitive financial data in a private subnet. After the data is processed by the EC2 worker instances, they will be delivered to S3 for ingestion by other services. How should you design this solution so that the data does not pass through the public Internet?

A

Configure a VPC Gateway Endpoint along with a corresponding route entry that directs the data to S3

The important concept that you have to understand in the scenario is that your VPC and your S3 bucket are located within the larger AWS network. However, the traffic coming from your VPC to your S3 bucket is traversing the public Internet by default. To better protect your data in transit, you can set up a VPC endpoint so the incoming traffic from your VPC will not pass through the public Internet, but instead through the private AWS network.

58
Q

You are an IT Consultant for a top investment bank which is in the process of building its new Forex trading platform. To ensure high availability and scalability, you designed the trading platform to use an Elastic Load Balancer in front of an Auto Scaling group of On-Demand EC2 instances across multiple Availability Zones. For its database tier, you chose to use a single Amazon Aurora instance to take advantage of its distributed, fault-tolerant and self-healing storage system. In the event of system failure on the primary database instance, what happens to Amazon Aurora during the failover?

A

Aurora will first attempt to create a new DB instance in the same AZ as the original instance. If unable to do so, Aurora will attempt to create a new DB instance in a different AZ.

If you do not have an Amazon Aurora Replica (i.e. single instance) and are not running Aurora Serverless, Aurora will attempt to create a new DB Instance in the same Availability Zone as the original instance. This replacement of the original instance is done on a best-effort basis and may not succeed, for example, if there is an issue that is broadly affecting the Availability Zone.

(The options that say: Amazon Aurora flips the canonical name record (CNAME) for your DB Instance to point at the healthy replica, which in turn is promoted to become the new primary and Amazon Aurora flips the A record of your DB Instance to point at the healthy replica, which in turn is promoted to become the new primary are incorrect because this will only happen if you are using an Amazon Aurora Replica. In addition, Amazon Aurora flips the canonical name record (CNAME) and not the A record (IP address) of the instance.)

59
Q

You are designing an online banking application which needs to have a distributed session data management. Currently, the application is hosted on an Auto Scaling group of On-Demand EC2 instances across multiple Availability Zones with a Classic Load Balancer that distributes the load. Which of the following options should you do to satisfy the given requirement?

A

Use Amazon ElastiCache

In this question, the keyword is distributed session data management. In AWS, you can use Amazon ElastiCache which offers fully managed Redis and Memcached service to manage and store session data for your web applications.

60
Q

A data analytics company keeps a massive volume of data which they store in their on-premises data center. To scale their storage systems, they are looking for cloud-backed storage volumes that they can mount using Internet Small Computer System Interface (iSCSI) devices from their on-premises application servers. They have an on-site data analytics application which frequently accesses the latest data subsets locally while the older data is rarely accessed. You are required to minimize the need to scale the on-premises storage infrastructure while still providing their web application with low-latency access to the data. Which type of AWS Storage Gateway service will you use to meet the above requirements?

A

Cached Volume Gateway

In this scenario, the technology company is looking for a storage service that will enable their analytics application to frequently access the latest data subsets and not the entire data set because it was mentioned that the old data are rarely being used. This requirement can be fulfilled by setting up a Cached Volume Gateway in AWS Storage Gateway.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
61
Q

You are working as a Solutions Architect for a leading technology company where you are instructed to troubleshoot the operational issues of your cloud architecture by logging the AWS API call history of your AWS resources. You need to quickly identify the most recent changes made to resources in your environment, including creation, modification, and deletion of AWS resources. One of the requirements is that the generated log files should be encrypted to avoid any security issues. Which of the following is the most suitable approach to implement the encryption?

A

Use CloudTrail with its default settings…

By default, CloudTrail event log files are encrypted using Amazon S3 server-side encryption (SSE).
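
For illustration, a hedged boto3 sketch of creating a trail with default settings; the trail and bucket names are placeholders. Because no KMS key is specified, the log files fall back to the default SSE-S3 encryption:

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Creating a trail without a KmsKeyId leaves the default SSE-S3 encryption in place.
    cloudtrail.create_trail(
        Name="management-events-trail",           # placeholder trail name
        S3BucketName="my-cloudtrail-logs-bucket"  # placeholder bucket with a CloudTrail policy
    )
    cloudtrail.start_logging(Name="management-events-trail")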

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
62
Q

You are building a prototype for a cryptocurrency news website of a small startup. The website will be deployed to a Spot EC2 Linux instance and will use Amazon Aurora as its database. You requested a spot instance at a maximum price of $0.04/hr, which was fulfilled immediately. After 90 minutes, the spot price increased to $0.06/hr and then your instance was terminated by AWS. In this scenario, what would be the total cost of running your spot instance?

A

$0.06

Since the Spot instance ran for more than an hour (past the first instance hour), you are charged from the time it was launched until the time it was terminated by AWS. The computation for the 90-minute usage is $0.04 (first 60 minutes) + $0.02 (remaining 30 minutes) = $0.06; hence, the correct answer is $0.06.
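
The same computation expressed as a quick Python sketch (values taken from the scenario):

    # AWS terminated the instance after the first hour, so the full 90 minutes are billable
    # at the fulfilled price of $0.04/hr.
    price_per_hour = 0.04
    minutes_running = 90

    total_cost = price_per_hour * (minutes_running / 60)
    print(f"${total_cost:.2f}")  # -> $0.06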

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
63
Q

How will I be charged if my Spot instance is interrupted?

A

If your Spot instance is terminated or stopped by Amazon EC2 in the first instance hour, you will not be charged for that usage. However, if you terminate the instance yourself, you will be charged to the nearest second. If the Spot instance is terminated or stopped by Amazon EC2 in any subsequent hour, you will be charged for your usage to the nearest second. If you are running on Windows and you terminate the instance yourself, you will be charged for an entire hour.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
64
Q

You are setting up configuration management in your existing cloud architecture, where you have to deploy and manage your EC2 instances and other AWS resources using Chef and Puppet. Which of the following is the most suitable service to use in this scenario?

A

AWS OpsWorks

AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
65
Q

You are working as an IT Consultant for a large financial firm. They have a requirement to store irreproducible financial documents using Amazon S3. For their quarterly reporting, the files are required to be retrieved after a period of 3 months. There will be some occasions when a surprise audit will be held, which requires access to the archived data that they need to present immediately. What will you do to satisfy this requirement in a cost-effective way?

A

Amazon S3 IA

In this scenario, the requirement is to have a storage option that is cost-effective and has the ability to access or retrieve the archived data immediately. The cost-effective options are Amazon Glacier Deep Archive and Amazon S3 Standard-Infrequent Access (Standard-IA). However, the former is not designed for rapid retrieval of data, which is required for the surprise audit.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
66
Q

You have an On-Demand EC2 instance with an attached EBS volume. There is a scheduled job that creates a snapshot of this EBS volume every midnight, when the instance is not used. One night, there has been a production incident where you need to perform a change on both the instance and the EBS volume at the same time that the snapshot is taking place. Which of the following scenarios is true when it comes to the usage of an EBS volume while the snapshot is in progress?

A

The EBS volume can be used while the snapshot is in progress…

Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed.

While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume; hence, you can still use the EBS volume normally.
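
A hedged boto3 sketch of this behavior (the volume ID is a placeholder): the call returns immediately with the snapshot in the pending state, while the volume remains usable throughout:

    import boto3

    ec2 = boto3.client("ec2")

    # The point-in-time snapshot is captured immediately; the call does not block.
    snap = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
        Description="midnight backup"
    )
    print(snap["State"])  # 'pending' -- reads/writes to the volume continue normally

    # Optionally poll until the modified blocks finish transferring to S3.
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])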

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
67
Q

An application is hosted in an Auto Scaling group of EC2 instances. To improve the monitoring process, you have to configure the current capacity to increase or decrease based on a set of scaling adjustments. This should be done by specifying the scaling metrics and threshold values for the CloudWatch alarms that trigger the scaling process. Which of the following is the most suitable type of scaling policy that you should use?

A

Step Scaling

Amazon EC2 Auto Scaling supports the following types of scaling policies:

Target tracking scaling - Increase or decrease the current capacity of the group based on a target value for a specific metric. This is similar to the way that your thermostat maintains the temperature of your home – you select a temperature and the thermostat does the rest.

Step scaling - Increase or decrease the current capacity of the group based on a set of scaling adjustments, known as step adjustments, that vary based on the size of the alarm breach.

Simple scaling - Increase or decrease the current capacity of the group based on a single scaling adjustment.

If you are scaling based on a utilization metric that increases or decreases proportionally to the number of instances in an Auto Scaling group, then it is recommended that you use target tracking scaling policies. Otherwise, it is better to use step scaling policies instead.
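
To make the step adjustments concrete, here is a hedged boto3 sketch of a step scaling policy; the group name, policy name, and breach intervals are placeholder choices:

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",   # placeholder group name
        PolicyName="cpu-step-scale-out",
        PolicyType="StepScaling",
        AdjustmentType="ChangeInCapacity",
        MetricAggregationType="Average",
        StepAdjustments=[
            # Breach of 0-20 above the alarm threshold: add 1 instance.
            {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 20,
             "ScalingAdjustment": 1},
            # Breach of 20+ above the threshold: add 3 instances.
            {"MetricIntervalLowerBound": 20, "ScalingAdjustment": 3},
        ],
    )

The returned policy ARN is what you would reference from the CloudWatch alarm that triggers the scaling.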

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
68
Q

Your IT Manager asks you to create a decoupled application whose process includes dependencies on EC2 instances and servers located in your company’s on-premises data center. Which of these options are you least likely to recommend as part of that process?

  • Establish a Direct Connect connection between your on-premises network and your VPC
  • SQS polling from an EC2 instance using IAM user credentials
  • SQS polling from an EC2 instance deployed with an IAM role
  • An SWF workflow
A

SQS polling from an EC2 instance using IAM user credentials

For decoupled applications, it is best to use SWF and SQS, both of which appear among the given options. Note that this question asks for the option that you would be LEAST likely to recommend.

SQS polling from an EC2 instance using IAM user credentials is not the recommended way to do so. It should use an IAM role instead.
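
For illustration, a minimal boto3 polling sketch: with an IAM role attached to the instance, boto3 resolves temporary credentials from the instance profile automatically, so no access keys appear in code (the queue URL and handler are placeholders):

    import boto3

    # No credentials in code: boto3 picks them up from the instance profile (IAM role).
    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # placeholder

    # Long polling (WaitTimeSeconds) reduces empty responses and cost.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        process(msg["Body"])  # hypothetical handler
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])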

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
69
Q

You are working as a Solutions Architect in a global investment bank which requires corporate IT governance and cost oversight of all of their AWS resources across their divisions around the world. Their corporate divisions want to maintain administrative control of the discrete AWS resources they consume and ensure that those resources are separate from other divisions. Which of the following options will support the autonomy of each corporate division while enabling the corporate IT to maintain governance and cost oversight? (Select TWO.)

  • Use AWS Consolidated Billing by creating AWS Organizations to link the divisions’ accounts to a parent corporate account
  • Create separate VPCs for each division within the corporate IT AWS account
  • Enable IAM cross-account access for all corporate IT administrators in each child account
  • Create separate Availability Zones for each division within the corporate IT AWS account
A

In this scenario, enabling IAM cross-account access for all corporate IT administrators in each child account and using AWS Consolidated Billing by creating AWS Organizations to link the divisions’ accounts to a parent corporate account are the correct choices. The combined use of IAM and Consolidated Billing will support the autonomy of each corporate division while enabling corporate IT to maintain governance and cost oversight.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
70
Q

You are working as an AWS Engineer in a major telecommunications company in which you are tasked to make a network monitoring system. You launched an EC2 instance to host the monitoring system and used CloudWatch to monitor, store, and access the log files of your instance. Which of the following provides an automated way to send log data to CloudWatch Logs from your Amazon EC2 instance?

A

CloudWatch Logs agent

The CloudWatch Logs agent provides an automated way to send log data to CloudWatch Logs from Amazon EC2 instances. The agent comprises the following components: a plug-in to the AWS CLI that pushes log data to CloudWatch Logs, a script (daemon) that initiates the process to push data to CloudWatch Logs, and a cron job that ensures the daemon is always running.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
71
Q

You are trying to enable Cross-Region Replication to your S3 bucket but this option is disabled. Which of the following options is a valid reason for this?

  • In order to use the Cross-Region Replication feature in S3, you need to first enable versioning on the bucket
  • The Cross-Region Replication feature is only available for S3-IA
  • The Cross-Region Replication feature is only available for Amazon S3 RRS
  • This is a premium feature only available for AWS Enterprise accounts
A

In order to use the Cross-Region Replication feature in S3, you need to first enable versioning on the bucket

To enable the cross-region replication feature in S3, the following items should be met (a boto3 sketch of these steps follows the list):

  • The source and destination buckets must have versioning enabled.
  • The source and destination buckets must be in different AWS Regions.
  • Amazon S3 must have permissions to replicate objects from that source bucket to the destination bucket on your behalf.
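
A hedged boto3 sketch of those steps; the bucket names and role ARN are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Versioning must be enabled on BOTH buckets before replication can be configured.
    for bucket in ("source-bucket", "destination-bucket"):
        s3.put_bucket_versioning(
            Bucket=bucket,
            VersioningConfiguration={"Status": "Enabled"},
        )

    # The role grants S3 permission to replicate objects on your behalf.
    s3.put_bucket_replication(
        Bucket="source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # placeholder
            "Rules": [{
                "Status": "Enabled",
                "Prefix": "",
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
            }],
        },
    )
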
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
72
Q

A WordPress website hosted in an EC2 instance, which has an additional EBS volume attached, was mistakenly deployed in the us-east-1a Availability Zone due to a misconfiguration in your CloudFormation template. There is a requirement to quickly rectify the issue by moving and attaching the EBS volume to a new EC2 instance in the us-east-1b Availability Zone. As the Solutions Architect of the company, which of the following should you do to solve this issue?

A

First, create a snapshot of the EBS volume. Afterwards, create a volume using the snapshot in the other AZ and attach it to the new EC2 instance.
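
A hedged boto3 sketch of that procedure; all IDs and the device name are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # 1. Snapshot the volume currently in us-east-1a.
    snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")  # placeholder
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # 2. Create a new volume from the snapshot in the target AZ.
    vol = ec2.create_volume(
        SnapshotId=snap["SnapshotId"],
        AvailabilityZone="us-east-1b",
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

    # 3. Attach it to the new instance in us-east-1b.
    ec2.attach_volume(
        VolumeId=vol["VolumeId"],
        InstanceId="i-0fedcba9876543210",  # placeholder
        Device="/dev/sdf",
    )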

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
73
Q

A company would like to store their old yet confidential corporate files that are infrequently accessed. Which is the MOST cost-efficient solution in AWS that should you recommend?

  • S3
  • Glacier
  • Storage Gateway
  • EBS
A

Glacier

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
74
Q

A multinational company has been building its new data analytics platform with high-performance computing (HPC) workloads, which requires a scalable, POSIX-compliant storage service. The data needs to be stored redundantly across multiple AZs and must allow concurrent connections from thousands of EC2 instances hosted in multiple Availability Zones. Which of the following AWS storage services is the most suitable one to use in this scenario?

A

EFS

In this question, you should take note of this phrase: “allows concurrent connections from multiple EC2 instances”. There are various AWS storage options that you can choose from, but whenever these criteria show up, always consider using EFS instead of EBS volumes, which are mainly used as “block” storage and can only be attached to one EC2 instance at a time.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
75
Q

You are working as a Solutions Architect for a major accounting firm, and they have a legacy general ledger accounting application that needs to be moved to AWS. However, the legacy application has a dependency on multicast networking. In this scenario, which of the following options should you consider to ensure the legacy application works in AWS?

  • All of the above
  • Provision Elastic Network Interfaces between the subnets
  • Create all the subnets on another VPC and enable VPC peering
  • Create a virtual overlay network on the OS level of the instance
A

create a virtual overlay network on the OS level of the instance

Creating a virtual overlay network running on the OS level of the instance is correct because overlay multicast is a method of building IP level multicast across a network fabric supporting unicast IP routing, such as Amazon Virtual Private Cloud (Amazon VPC).

(Amazon VPC does not support multicast or broadcast networking)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
76
Q

You have a fleet of running Spot EC2 instances behind an Application Load Balancer. The incoming traffic comes from various users across multiple AWS regions and you would like to have the user’s session shared among your fleet of instances. You are required to set up a distributed session management layer that will provide a scalable and shared data storage for the user sessions. Which of the following would be the best choice to meet the requirement while still providing sub-millisecond latency for your users?

  • ElastiCache in-memory caching
  • Multi-master DynamoDB
  • ELB sticky sessions
  • Multi-AZ RDS
A

ElastiCache in-memory caching

For sub-millisecond latency caching, ElastiCache is the best choice.

(Multi-master DynamoDB and Multi-AZ RDS are incorrect because although you can use DynamoDB and RDS for storing session state, these two are not the best choices in terms of cost-effectiveness and performance when compared to ElastiCache. There is a significant difference in terms of latency if you used DynamoDB and RDS when you store the session data.)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
77
Q

You recently created a brand new IAM User with default settings using the AWS CLI. This user is intended to be used to send API requests to S3, DynamoDB, Lambda, and the other AWS resources of your cloud infrastructure. Which of the following must be done to allow the user to make API calls to your AWS resources?

  • Enable MFA for the user
  • create a set of Access Keys for the user and attach the necessary permissions
  • Assign an IAM policy to the user to allow it to send API calls
  • Do nothing as the IAM user is already capable of sending API calls to your AWS resources
A

create a set of Access Keys for the user and attach the necessary permissions

You can choose the credentials that are right for your IAM user. When you use the AWS Management Console to create a user, you must choose to at least include a console password or access keys. By default, a brand new IAM user created using the AWS CLI or AWS API has no credentials of any kind. You must create the type of credentials for an IAM user based on the needs of your user.

(Assigning an IAM policy to the user to allow it to send API calls is incorrect because adding a new IAM policy to the new user will not grant the access keys needed to make API calls to the AWS resources.)
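
For illustration, a minimal boto3 sketch of the correct answer; the user name and policy ARN are placeholders:

    import boto3

    iam = boto3.client("iam")

    # A user created via the CLI/API starts with no credentials of any kind.
    keys = iam.create_access_key(UserName="api-service-user")  # placeholder user
    print(keys["AccessKey"]["AccessKeyId"])  # hand the key pair to the user securely

    # Permissions are granted separately, e.g. via a managed policy.
    iam.attach_user_policy(
        UserName="api-service-user",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    )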

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
78
Q

You are working for a startup that builds Internet of Things (IoT) devices and a monitoring application. They are using IoT sensors to collect all monitoring data with Amazon Kinesis, configured with default settings. You then send the data to an Amazon S3 bucket after 2 days. When you checked the data in S3, there was only data for the last day and nothing for the first day. What is the root cause of this issue?

A

By default, data records in Kinesis are only accessible for 24 hours from the time they are added to a stream
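
One hedged way to avoid this data loss, sketched with boto3 (the stream name is a placeholder), is to raise the retention period above the 24-hour default before the records age out:

    import boto3

    kinesis = boto3.client("kinesis")

    # Extend retention from the 24-hour default to 48 hours so two days of
    # records remain readable. (Longer retention incurs additional charges.)
    kinesis.increase_stream_retention_period(
        StreamName="iot-sensor-stream",  # placeholder
        RetentionPeriodHours=48,
    )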

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
79
Q

The company that you are working for has instructed you to create a cost-effective cloud solution for their online movie ticketing service. Your team has designed a solution of using a fleet of Spot EC2 instances to host the new ticketing web application. You requested a spot instance at a maximum price of $0.06/hr which has been fulfilled immediately. After 45 minutes, the spot price increased to $0.08/hr and then your instance was terminated by AWS. What was the total EC2 compute cost of running your spot instances?

A

$0.00

If your Spot instance is terminated or stopped by Amazon EC2 in the first instance hour, you will not be charged for that usage. However, if you terminate the instance yourself, you will be charged to the nearest second.

If the Spot instance is terminated or stopped by Amazon EC2 in any subsequent hour, you will be charged for your usage to the nearest second. If you are running on Windows and you terminate the instance yourself, you will be charged for an entire hour.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
80
Q

Your boss has asked you to launch a new MySQL RDS which ensures that you are available to recover from a database crash. Which of the below is not a recommended practice for RDS?

  • use MyISAM as the storage engine for MySQL
  • partition your large tables so that file sizes do not exceed the 16 TB limit
  • ensure that automated backups are enabled for the RDS
  • use the InnoDB as the storage engine for MySQL
A

Using MyISAM as the storage engine for MySQL is not recommended. The recommended storage engine for MySQL is InnoDB and not MyISAM.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
81
Q

A multinational corporate and investment bank regularly processes steady workloads of accruals, loan interests, and other critical financial calculations every night from 10 PM to 3 AM in their on-premises data center for their corporate clients. Once the process is done, the results are then uploaded to the Oracle General Ledger, which means that the processing should not be delayed nor interrupted. The CTO has decided to move their IT infrastructure to AWS to save costs and to improve the scalability of their digital financial services. As the Senior Solutions Architect, how can you implement a cost-effective architecture in AWS for their financial system?

A

Use Scheduled Reserved Instances, which provide compute capacity that is reserved on the specified recurring schedule

Scheduled Reserved Instances (Scheduled Instances) enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
82
Q

You were hired as an IT Consultant in a startup cryptocurrency company that wants to go global with their international money transfer app. Your project is to make sure that the database of the app is highly available on multiple regions.

What are the benefits of adding Multi-AZ deployments in Amazon RDS? (Select TWO.)

  • creates a primary DB instance and synchronously replicates the data to a standby instance in a different AZ in a different region
  • provides SQL optimization
  • it makes the database fault-tolerant to an AZ failure
  • increased database availability in the case of system upgrades like OS patching or DB instance scaling
  • significantly increase the database performance
A
  • it makes the database fault-tolerant to an AZ failure
  • increased database availability in the case of system upgrades like OS patching or DB instance scaling
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
83
Q

You are working for a weather station in Asia with a weather monitoring system that needs to be migrated to AWS. Since the monitoring system requires a low network latency and high network throughput, you decided to launch your EC2 instances in a new cluster placement group. The system was working fine for a couple of weeks; however, when you tried to add new instances to the placement group that already had running EC2 instances, you received an ‘insufficient capacity error’. How will you fix this issue?

  • create another Placement Group and launch the new instances in the new group
  • Submit a capacity increase request to AWS as you are initially limited to only 12 instances per placement group
  • verify all running instances are of the same size and type and then try the launch again
  • stop and restart the instances in the Placement Group and then try the launch again
A

stop and restart the instances in the Placement Group and then try the launch again

The option that says: Stop and restart the instances in the Placement Group and then try the launch again is correct because you can resolve this issue simply by launching again. If the instances are stopped and restarted, AWS may move the instances to hardware that has capacity for all the requested instances.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
84
Q

As the Solutions Architect, you have built a photo-sharing site for an entertainment company. The site was hosted using 3 EC2 instances in a single availability zone with a Classic Load Balancer in front to evenly distribute the incoming load. What should you do to enable your Classic Load Balancer to bind a user’s session to a specific instance?

A

Sticky sessions

By default, a Classic Load Balancer routes each request independently to the registered instance with the smallest load. However, you can use the sticky session feature (also known as session affinity), which enables the load balancer to bind a user’s session to a specific instance. This ensures that all requests from the user during the session are sent to the same instance.
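
A hedged boto3 sketch of enabling duration-based stickiness on a Classic Load Balancer; the names, port, and duration are placeholder choices:

    import boto3

    elb = boto3.client("elb")  # Classic Load Balancer API

    # Create a duration-based stickiness policy (cookie expires after 1 hour).
    elb.create_lb_cookie_stickiness_policy(
        LoadBalancerName="photo-site-clb",  # placeholder
        PolicyName="sticky-1h",
        CookieExpirationPeriod=3600,
    )

    # Bind the policy to the listener so each session sticks to one instance.
    elb.set_load_balancer_policies_of_listener(
        LoadBalancerName="photo-site-clb",
        LoadBalancerPort=80,
        PolicyNames=["sticky-1h"],
    )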

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
85
Q

A tech company is running two production web servers hosted on Reserved EC2 instances with EBS-backed root volumes. These instances have a consistent CPU load of 90%. Traffic is being distributed to these instances by an Elastic Load Balancer. In addition, they also have Multi-AZ RDS MySQL databases for their production, test, and development environments.

What recommendation would you make to reduce cost in this AWS environment without affecting availability and performance of mission-critical systems? Choose the best answer.

A
  • consider not using a multi-AZ RDS deployment for the development and test data

One thing that you should notice here is that the company is using Multi-AZ databases in all of their environments, including their development and test environments. This is costly and unnecessary as these two environments are not critical. Multi-AZ deployments should be reserved for the production environment; dropping them from development and test reduces costs, which is why the option that says: Consider not using a Multi-AZ RDS deployment for the development and test database is the correct answer.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
86
Q

You have several EC2 Reserved Instances in your account that need to be decommissioned and shut down since they are no longer required. The data is still required by the Audit team. Which of the following steps can be taken for this scenario? (Select TWO.)

  • Convert the EC2 instances to On-Demand instances
  • You can opt to sell these EC2 instances on the AWS Reserved Instance Marketplace
  • Convert the EC2 instances to Spot instances with a persistent Spot request type
  • Take snapshots of the EBS volumes and terminate the EC2 instances
A

You can take snapshots of the EBS volumes to save the data for the Audit team, terminate the EC2 instances, and then sell the unused Reserved Instances on the AWS Reserved Instance Marketplace.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
87
Q

You deployed a web application to an EC2 instance that adds a variety of photo effects to a picture uploaded by the users. The application will put the generated photos to an S3 bucket by sending PUT requests to the S3 API. What is the best option for this scenario considering that you need to have API credentials to be able to send a request to the S3 API?

A

Create a role in IAM. Afterwards, assign this role to a new EC2 instance.

The best option is to create a role in IAM. Afterwards, assign this role to a new EC2 instance. Applications must sign their API requests with AWS credentials. Therefore, if you are an application developer, you need a strategy for managing credentials for your applications that run on EC2 instances.

(storing your API credentials in S3 Glacier is incorrect as S3 Glacier is used for data archives and not for managing API credentials)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
88
Q

Your company has developed a financial analytics web application hosted in a Docker container using MEAN (MongoDB, Express.js, AngularJS, and Node.js) stack. You want to easily port that web application to AWS Cloud which can automatically handle all the tasks such as balancing load, auto-scaling, monitoring, and placing your containers across your cluster. Which of the following services can be used to fulfill this requirement?

A

AWS Elastic Beanstalk

Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren’t supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
89
Q

What is OpsWorks?

A

AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
90
Q

What is AWS CodeDeploy?

A

CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, or serverless Lambda functions. It allows you to rapidly release new features, update Lambda function versions, avoid downtime during application deployment, and handle the complexity of updating your applications, without many of the risks associated with error-prone manual deployments.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
91
Q

A web application is hosted on a fleet of EC2 instances inside an Auto Scaling Group with a couple of Lambda functions for ad hoc processing. Whenever you release updates to your application every week, there are inconsistencies where some resources are not updated properly. You need a way to group the resources together and deploy the new version of your code consistently among the groups with minimal downtime. Which among these options should you do to satisfy the given requirement with the least effort?

A

Use deployment groups in CodeDeploy to automate code deployments in a consistent manner.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
92
Q

A commercial bank has designed their next generation online banking platform to use a distributed system architecture. As their Software Architect, you have to ensure that their architecture is highly scalable, yet still cost-effective. Which of the following will provide the most suitable solution for this scenario?

  • Launch multiple EC2 instances behind an ALB to host your application services, and SWF which will act as a highly-scalable buffer that stores messages as they travel between distributed applications
  • Launch an Auto Scaling group of EC2 instances to host your application services and an SQS queue. Include an Auto Scaling trigger to watch the SQS queue size, which will either scale in or scale out the number of EC2 instances based on the queue
  • Launch multiple On-Demand EC2 instances to host your application services and an SQS queue which will act as a highly-scalable buffer that stores messages as they travel between distributed applications
A

Launch an Auto Scaling group of EC2 instances to host your application services and an SQS queue. Include an Auto Scaling trigger to watch the SQS queue size, which will either scale in or scale out the number of EC2 instances based on the queue

There are three main parts in a distributed messaging system: the components of your distributed system, which can be hosted on EC2 instances; your queue (distributed on Amazon SQS servers); and the messages in the queue.

To improve the scalability of your distributed system, you can add an Auto Scaling group to your EC2 instances.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
93
Q

You are the Solutions Architect of a software development company where you are required to connect the on-premises infrastructure to their AWS cloud. Which of the following AWS services can you use to accomplish this? (Select TWO.)

  • AWS Direct Connect
  • VPC peering
  • NAT Gateway
  • Amazon Connect
  • IPsec VPN connection
A

Direct Connect + IPsec VPN Connection

You can connect your VPC to remote networks by using a VPN connection which can be Direct Connect, IPsec VPN connection, AWS VPN CloudHub, or a third party software VPN appliance. Hence, IPsec VPN connection and AWS Direct Connect are the correct answers.

(Amazon Connect is incorrect because this is not a VPN connectivity option. It is actually a self-service, cloud-based contact center service in AWS that makes it easy for any business to deliver better customer service at a lower cost. Amazon Connect is based on the same contact center technology used by Amazon customer service associates around the world to power millions of customer conversations.)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
94
Q

What is Amazon Connect?

A

It is actually a self-service, cloud-based contact center service in AWS that makes it easy for any business to deliver better customer service at a lower cost. Amazon Connect is based on the same contact center technology used by Amazon customer service associates around the world to power millions of customer conversations.

(Amazon Connect is NOT a VPN connectivity option unlike Direct Connect.)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
95
Q

A multinational manufacturing company has multiple accounts in AWS to separate their various departments such as finance, human resources, engineering and many others. There is a requirement to ensure that certain access to services and actions are properly controlled to comply with the security policy of the company. As the Solutions Architect, which is the most suitable way to set up the multi-account AWS environment of the company?

  • Use AWS Organizations and Service Control Policies to control services on each account
  • Connect all departments by setting up a cross-account access to each of the AWS accounts of the company. Create and attach IAM policies to your resources based on their respective departments to control access.
  • Set up a common IAM policy that can be applied across all AWS accounts
  • Provide access to externally authenticated users via Identity Federation. Set up an IAM role to specify permissions for users from each department whose identity is federated from your organization or a third-party identity provider
A

Use AWS Organizations and Service Control Policies to control services on each account

AWS Organizations offers policy-based management for multiple AWS accounts. With Organizations, you can create groups of accounts, automate account creation, apply and manage policies for those groups. Organizations enables you to centrally manage policies across multiple accounts, without requiring custom scripts and manual processes. It allows you to create Service Control Policies (SCPs) that centrally control AWS service use across multiple AWS accounts.
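
For illustration, a hedged boto3 sketch of creating and attaching a simple SCP; the policy content, names, and OU ID are examples only:

    import boto3
    import json

    org = boto3.client("organizations")

    # Example SCP that blocks a service the security policy disallows.
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "redshift:*",  # example: this department may not use Redshift
            "Resource": "*",
        }],
    }

    policy = org.create_policy(
        Name="deny-redshift",
        Description="Block Redshift for the finance accounts",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )

    # Attach it to an organizational unit (OU ID is a placeholder).
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-ab12-cdefgh34",
    )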

(The option that says: Connect all departments by setting up a cross-account access to each of the AWS accounts of the company. Create and attach IAM policies to your resources based on their respective departments to control access is incorrect because although you can set up cross-account access to each department, this entails a lot of configuration compared with using AWS Organizations and Service Control Policies (SCPs). Cross-account access would be a more suitable choice if you only have two accounts to manage, but not for multiple accounts.)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
96
Q

You are a Solutions Architect in an intelligence agency that is currently hosting a learning and training portal in AWS. Your manager instructed you to launch a large EC2 instance with an attached EBS Volume and enable Enhanced Networking. What are the valid case scenarios in using Enhanced Networking? (Select TWO.)

  • when you need consistently lower inter-instance latencies
  • when you need high latency networking
  • when you need a dedicated connection to your on-premises data center
  • when you need a low packet-per-second performance
  • when you need a higher packet-per-second performance
A

Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. There is no additional charge for using enhanced networking.

  • when you need consistently lower inter-instance latencies
  • when you need a higher packet-per-second performance
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
97
Q

You are a Solutions Architect working for a software development company. You are planning to launch a fleet of EBS-backed EC2 instances and want to automatically assign each instance with a static private IP address which does not change even if the instances are restarted. What should you do to accomplish this?

A

Launch the instances in an Amazon VPC

In EC2-Classic, your EC2 instance receives a private IPv4 address from the EC2-Classic range each time it’s started. In EC2-VPC, on the other hand, your EC2 instance receives a static private IPv4 address from the address range of your default VPC. Hence, the correct answer is launching the instances in the Amazon Virtual Private Cloud (VPC) and not launching the instances in EC2-Classic.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
98
Q

You are working for a startup which develops an AI-based traffic monitoring service. You need to register a new domain called www.tutorialsdojo-ai.com and set up other DNS entries for the other components of your system in AWS. Which of the following is not supported by Amazon Route 53?

  • DNSSEC (Domain Name System Security Extensions)
  • PTR (pointer record)
  • SRV (service locator)
  • SPF (sender policy framework)
A

Amazon Route 53’s DNS service does not support DNSSEC at this time. However, its domain name registration service supports configuration of signed DNSSEC keys for domains when DNS service is configured at another provider.

Amazon Route 53 currently supports:

  • A (address record)
  • AAAA (IPv6 address record)
  • CNAME (canonical name record)
  • CAA (certification authority authorization)
  • MX (mail exchange record)
  • NAPTR (name authority pointer record)
  • NS (name server record)
  • PTR (pointer record)
  • SOA (start of authority record)
  • SPF (sender policy framework)
  • SRV (service locator)
  • TXT (text record)
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
99
Q

A top university has recently launched its online learning portal where the students can take e-learning courses from the comforts of their homes. The portal is on a large On-Demand EC2 instance with a single Amazon Aurora database. How can you improve the availability of your Aurora database to prevent any unnecessary downtime of the online portal?

A

Use a Multi-AZ deployment! (this is the best you can do)

Next, create Amazon Aurora Replicas (for most use cases, including read scaling and high availability, using Amazon Aurora Replicas is recommended)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
100
Q

AWS hosts a variety of public datasets such as satellite imagery, geospatial, or genomic data that you want to use for your web application hosted in Amazon EC2. If you use these datasets, how much will it cost you?

A

AWS hosts a variety of public datasets that anyone can access for free.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
101
Q

You have designed and built a new AWS architecture. After deploying your application to an On-Demand EC2 instance, you found that there is an issue in your application when connecting to port 443. After troubleshooting the issue, you added port 443 to the security group of the instance. How long will it take before the changes are applied to all of the resources in your VPC?

A

Immediately

The correct answer is Immediately. Changes made in a security group are immediately implemented. There is no need to wait for some amount of time for propagation nor reboot any instances for your changes to take effect.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
102
Q

A Solutions Architect designed a real-time data analytics system based on Kinesis Data Stream and Lambda. A week after the system has been deployed, the users noticed that it performed slowly as the data rate increases. The Architect identified that the performance of the Kinesis Data Streams is causing this problem. Which of the following should the Architect do to improve performance?

A

Increase the number of shards of the Kinesis stream by using the “UpdateShardCount” command

Amazon Kinesis Data Streams supports resharding, which lets you adjust the number of shards in your stream to adapt to changes in the rate of data flow through the stream.

There are two types of resharding operations: shard split and shard merge. In a shard split, you divide a single shard into two shards. In a shard merge, you combine two shards into a single shard. Splitting increases the number of shards in your stream and therefore increases the data capacity of the stream. Because you are charged on a per-shard basis, splitting increases the cost of your stream. Similarly, merging reduces the number of shards in your stream and therefore decreases the data capacity—and cost—of the stream.
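
A minimal boto3 sketch of the UpdateShardCount call mentioned above; the stream name and target count are placeholders:

    import boto3

    kinesis = boto3.client("kinesis")

    # Double the capacity of the stream; UNIFORM_SCALING splits/merges shards evenly.
    kinesis.update_shard_count(
        StreamName="analytics-stream",  # placeholder
        TargetShardCount=4,             # e.g. scaling up from 2 shards
        ScalingType="UNIFORM_SCALING",
    )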

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
103
Q

You are working as a Senior Solutions Architect for a data analytics company which has a VPC for their human resource department, and another VPC located on a different region for their finance department. You need to configure your architecture to allow the finance department to access all resources that are in the human resource department and vice versa. Which type of networking connection in AWS should you set up to satisfy the above requirement?

A

Inter-Region VPC Peering

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
104
Q

You are working as the Solutions Architect for a global technology consultancy firm which has an application that uses multiple EC2 instances located in various AWS regions such as US East (Ohio), US West (N. California), and EU (Ireland). Your manager instructed you to set up a latency-based routing to route incoming traffic for www.tutorialsdojo.com to all the EC2 instances across all AWS regions. Which of the following options can satisfy the given requirement?

A

Use Route 53 latency-based routing to distribute the load to the multiple EC2 instances across all AWS regions

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
105
Q

What is AWS DataSync?

A

It is simply a service that provides a fast way to move large amounts of data online between on-premises storage and Amazon S3 or Amazon EFS.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
106
Q

A mobile application stores pictures in Amazon Simple Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for this scenario?

A

Web Identity Federation

With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure because you don’t have to embed and distribute long-term security credentials with your application.
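
A hedged boto3 sketch of the token exchange; the role ARN is a placeholder, and the token would come from the OIDC provider's sign-in flow:

    import boto3

    sts = boto3.client("sts")

    # Exchange the IdP-issued token for temporary AWS credentials mapped to an IAM role.
    resp = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/photo-app-user",  # placeholder
        RoleSessionName="mobile-user-session",
        WebIdentityToken="<token-from-oidc-provider>",            # placeholder
    )

    creds = resp["Credentials"]  # temporary AccessKeyId, SecretAccessKey, SessionToken
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )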

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
107
Q

You are working as a Solution Architect for a startup in Silicon Valley. Their application architecture is currently set up to store both the access key ID and the secret access key in a plain text file on a custom Amazon Machine Image (AMI). The EC2 instances, which are created by using this AMI, are using the stored access keys to connect to a DynamoDB table. What should you do to make the current architecture more secure?

A

Remove the stored access keys in the AMI. Create a new IAM role with permissions to access the DynamoDB table and assign it to the EC2 instances…

You should use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use an IAM role, you don’t have to distribute long-term credentials (such as a user name and password or access keys) to an EC2 instance.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
108
Q

A startup company wants to launch a fleet of EC2 instances on AWS. Your manager wants to ensure that the Java programming language is installed automatically when the instance is launched. In which of the below configurations can you achieve this requirement?

  • User data
  • EC2Config Service
  • AWS Config
  • IAM Roles
A

User Data

When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can write and run scripts that install new packages, software, or tools in your instance when it is launched.
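
A hedged boto3 sketch of passing such a script at launch; the AMI ID is a placeholder and the script assumes a yum-based Amazon Linux image:

    import boto3

    ec2 = boto3.client("ec2")

    # Shell script executed once, as root, when the instance first boots.
    user_data = "#!/bin/bash\nyum update -y\nyum install -y java-11-amazon-corretto\n"

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData=user_data,  # boto3 base64-encodes the string for you
    )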

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
109
Q

What is AWS Config?

A

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
110
Q

You are setting up the required compute resources in your VPC for your application, which has workloads that require high, sequential read and write access to very large data sets on local storage. Which of the following instance types is the most suitable one to use in this scenario?

A

Storage Optimized Instances

Storage Optimized Instances is the correct answer. Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
111
Q

What are Memory Optimized Instances for?

A

Memory Optimized Instances are designed to deliver fast performance for workloads that process large data sets in memory, which is quite different from handling high read and write capacity on local storage (as Storage Optimized Instances do)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
112
Q

What are Compute Optimized Instances for?

A

Compute Optimized Instances are ideal for compute-bound applications that benefit from high-performance processors, such as batch processing workloads and media transcoding.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
113
Q

A startup is building an AI-based face recognition application in AWS, where they store millions of images in an S3 bucket. As the Solutions Architect, you have to ensure that each and every image uploaded to their system is stored without any issues. What is the correct indication that an object was successfully stored when you put objects in Amazon S3?

A

HTTP 200 result code and MD5 checksum

If you triggered an S3 API call and got an HTTP 200 result code and the MD5 checksum, then it is considered a successful upload. The S3 API will return an error code in case the upload is unsuccessful.
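
For illustration, a minimal boto3 sketch of checking those success indicators; the bucket and key are placeholders, and note that the returned ETag equals the object's MD5 only for simple (non-multipart) uploads:

    import boto3

    s3 = boto3.client("s3")

    resp = s3.put_object(
        Bucket="face-images-bucket",  # placeholder
        Key="uploads/user-42.jpg",
        Body=open("user-42.jpg", "rb"),
    )

    status = resp["ResponseMetadata"]["HTTPStatusCode"]  # 200 on success
    etag = resp["ETag"]  # MD5 of the object for simple (non-multipart) uploads
    print(status, etag)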

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
114
Q

You are working as a Solutions Architect in a well-funded financial startup. The CTO instructed you to launch a cryptocurrency mining server on a Reserved EC2 instance in us-east-1 region’s private subnet which is using IPv6. Due to the financial data that the server contains, the system should be secured to avoid any unauthorized access and to meet the regulatory compliance requirements. In this scenario, which VPC feature allows the EC2 instance to communicate to the Internet but prevents inbound traffic?

A

Egress-only Internet gateway

An egress-only Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in your VPC to the Internet, and prevents the Internet from initiating an IPv6 connection with your instances.

Take note that an egress-only Internet gateway is for use with IPv6 traffic only. To enable outbound-only Internet communication over IPv4, use a NAT gateway instead.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
115
Q

You are working as a Cloud Engineer in a leading technology consulting firm which is using a fleet of Windows-based EC2 instances with IPv4 addresses launched in a private subnet. Several software installed in the EC2 instances are required to be updated via the Internet. Which of the following services can provide you with a highly available solution to safely allow the instances to fetch the software patches from the Internet but prevent outside network from initiating a connection?

A

NAT Gateway

(Egress-Only Internet Gateway is incorrect because this is primarily used for VPCs that use IPv6 to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances, just like what NAT Instance and NAT Gateway do. The scenario explicitly says that the EC2 instances are using IPv4 addresses which is why Egress-only Internet gateway is invalid, even though it can provide the required high availability.)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
116
Q

A web application, which is hosted in your on-premises data center and uses a MySQL database, must be migrated to AWS Cloud. You need to ensure that the network traffic to and from your RDS database instance is encrypted using SSL. For improved security, you have to use the profile credentials specific to your EC2 instance to access your database, instead of a password. Which of the following should you do to meet the above requirement?

  • Launch a new RDS database instance with the Backtrack feature enabled
  • Launch the mysql client using the --ssl-ca parameter when connecting to the database
  • Set up an RDS database and enable IAM DB Authentication
  • Configure your RDS database to enable encryption
A

Set up an RDS database and enable IAM DB Authentication

You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don’t need to use a password when you connect to a DB instance. Instead, you use an authentication token.
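
A hedged boto3 sketch of generating the short-lived token that replaces the password; the endpoint and user name are placeholders:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Short-lived token signed with the instance profile credentials; no stored password.
    token = rds.generate_db_auth_token(
        DBHostname="mydb.abc123.us-east-1.rds.amazonaws.com",  # placeholder endpoint
        Port=3306,
        DBUsername="app_user",  # an IAM-authentication-enabled DB user
    )
    # Connect with your MySQL driver using `token` as the password and SSL enabled.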

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
117
Q

You are instructed by your manager to create a publicly accessible EC2 instance by using an Elastic IP (EIP) address and also to give him a report on how much it will cost to use that EIP. Which of the following statements is correct regarding the pricing of EIP?

  • There is no cost if the instance is stopped and it has only one associated EIP
  • There is no cost if the instance is running and it has at least two associated EIPs
  • There is no cost if the instance is terminated and it has only one associated EIP
  • There is no cost if the instance is running and it has only one associated EIP
A

There is no cost if the instance is running and it has only one associated EIP

An Elastic IP address doesn’t incur charges as long as the following conditions are true:

  • The Elastic IP address is associated with an Amazon EC2 instance.
  • The instance associated with the Elastic IP address is running.
  • The instance has only one Elastic IP address attached to it.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
118
Q

A fast food company is using AWS to host their online ordering system which uses an Auto Scaling group of EC2 instances deployed across multiple Availability Zones with an Application Load Balancer in front. To better handle the incoming traffic from various digital devices, you are planning to implement a new routing system where requests which have a URL of <server>/api/android are forwarded to one specific target group named "Android-Target-Group". Conversely, requests which have a URL of <server>/api/ios are forwarded to another separate target group named "iOS-Target-Group". How can you implement this change in AWS?

A

Use path conditions to define rules that forward requests to different target groups based on the URL in the request

You can use path conditions to define rules that forward requests to different target groups based on the URL in the request (also known as path-based routing). This type of routing is the most appropriate solution for this scenario.

Note: only the Application Load Balancer supports path-based routing.
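
A hedged boto3 sketch of the two path-based rules; the listener and target group ARNs are placeholders:

    import boto3

    elbv2 = boto3.client("elbv2")

    listener_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/demo-alb/abc/def"  # placeholder

    rules = [
        (10, "/api/android*", "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/Android-Target-Group/abc"),
        (20, "/api/ios*",     "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/iOS-Target-Group/def"),
    ]

    for priority, path, tg_arn in rules:
        # Each rule matches a URL path pattern and forwards to its target group.
        elbv2.create_rule(
            ListenerArn=listener_arn,
            Priority=priority,
            Conditions=[{"Field": "path-pattern", "Values": [path]}],
            Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
        )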

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
119
Q

You are working for a global news network where you have set up a CloudFront distribution for your web application. However, you noticed that your application’s origin server is being hit for each request instead of the AWS Edge locations, which serve the cached objects. The issue occurs even for the commonly requested objects. What could be a possible cause of this issue?

  • An object is only cached by CloudFront once a successful request has been made; hence, the objects were not requested before, which is why the request is still directed to the origin server
  • The Cache-Control max-age directive is set to zero
  • You did not add an SSL certificate
  • The file sizes of the cached objects are too large for CloudFront to handle
A

In this scenario, the main culprit is that the Cache-Control max-age directive is set to a low value, which is why the request is always directed to your origin server. Hence, the correct answer is the option that says: The Cache-Control max-age directive is set to zero.

The Cache-Control and Expires headers control how long objects stay in the cache. The Cache-Control max-age directive lets you specify how long (in seconds) you want an object to remain in the cache before CloudFront gets the object again from the origin server. The minimum expiration time CloudFront supports is 0 seconds for web distributions and 3600 seconds for RTMP distributions.
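
For illustration, a hedged boto3 sketch of setting a non-zero max-age on an object at the S3 origin so CloudFront can actually cache it; the bucket and key are placeholders:

    import boto3

    s3 = boto3.client("s3")

    s3.put_object(
        Bucket="news-assets-origin",   # placeholder origin bucket
        Key="img/logo.png",
        Body=open("logo.png", "rb"),
        CacheControl="max-age=86400",  # keep the object cached at the edge for 1 day
    )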

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
120
Q

A company is planning to deploy a High Performance Computing (HPC) cluster in its VPC that requires a scalable, high-performance file system. The storage service must be optimized for efficient workload processing, and the data must be accessible via a fast and scalable file system interface. It should also work natively with Amazon S3 that enables you to easily process your S3 data with a high-performance POSIX interface. Which of the following is the MOST suitable service that you should use for this scenario?

A

Amazon FSx for Lustre

For compute-intensive and fast processing workloads, like high-performance computing (HPC), machine learning, EDA, and media processing, Amazon FSx for Lustre provides a file system that’s optimized for performance, with input and output stored on Amazon S3.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
121
Q

A game development company operates several virtual reality (VR) and augmented reality (AR) games which use various RESTful web APIs hosted on their on-premises data center. Due to the unprecedented growth of their company, they decided to migrate their system to AWS Cloud to scale out their resources as well to minimize costs. Which of the following should you recommend as the most cost-effective and scalable solution to meet the above requirement?

  • Set up a microservice architecture with ECS, ECR, and Fargate
  • Use AWS Lambda and Amazon API Gateway
  • Host the APIs in a static S3 web hosting bucket behind a CloudFront web distribution
  • Use a Spot Fleet of Amazon EC2 instances, each with an Elastic Fabric Adapter for more consistent latency and higher network throughput. Set up an ALB to distribute traffic to the instances.
A

Use AWS Lambda and Amazon API Gateway

The best possible answer here is to use Lambda and API Gateway because this solution is both scalable and cost-effective. You will only be charged when your Lambda function is invoked, unlike an EC2 instance, which keeps running (and incurring charges) even when you don’t use it.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
122
Q

You are working as a Senior Solutions Architect in a digital media services startup. Your current project is about a movie streaming app where you are required to launch several EC2 instances on multiple availability zones. Which of the following will configure your load balancer to distribute incoming requests evenly to all EC2 instances across multiple Availability Zones?

A

Cross-Zone Load Balancing

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
123
Q

A Solutions Architect is developing a three-tier cryptocurrency web application for a FinTech startup. The Architect has been instructed to restrict access to the database tier to only accept traffic from the application-tier and deny traffic from other sources. The application-tier is composed of application servers hosted in an Auto Scaling group of EC2 instances. Which of the following options is the MOST suitable solution to implement in this scenario?

  • Set up the NACL of the database subnet to deny all inbound non-database traffic from the subnet of the application tier
  • Set up the security group of the database tier to allow database traffic from a specified list of application server IP addresses
  • Set up the security group of the database tier to allow database traffic from the security group of the application servers
  • Set up the NACL of the database subnet to allow inbound database traffic from the subnet of the application tier
A

Set up the security group of the database tier to allow database traffic from the security group of the application servers.

In the scenario, the servers of the application-tier are in an Auto Scaling group which means that the number of EC2 instances could grow or shrink over time. An Auto Scaling group could also cover one or more Availability Zones (AZ) which have their own subnets. Hence, the most suitable solution would be to set up the security group of the database tier to allow database traffic from the security group of the application servers since you can utilize the security group of the application-tier Auto Scaling group as the source for the security group rule in your database tier.
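
A hedged boto3 sketch of that rule, using the application tier's security group (rather than IP addresses) as the traffic source; the group IDs and the MySQL port are placeholder choices:

    import boto3

    ec2 = boto3.client("ec2")

    ec2.authorize_security_group_ingress(
        GroupId="sg-0db000000000000aa",  # placeholder: database-tier security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 3306,            # e.g. MySQL
            "ToPort": 3306,
            # The source is a security group, so the rule covers every instance
            # the Auto Scaling group launches, however the fleet grows or shrinks.
            "UserIdGroupPairs": [{"GroupId": "sg-0app0000000000bb"}],
        }],
    )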

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
124
Q

You are a Solutions Architect of a tech company. You are having an issue whenever you try to connect to your newly created EC2 instance using a Remote Desktop connection from your computer. Upon checking, you have verified that the instance has a public IP and the Internet gateway and route tables are in place. What else should you do to resolve this issue?

  • You should adjust the security group to allow traffic on port 22
  • You should restart the EC2 instance since there might be some issue with the instance
  • You should adjust the security group to allow traffic on port 3389
  • You should create a new instance since there might be some issue with the instance
A

Since you are using a Remote Desktop connection to access your EC2 instance, you have to ensure that the Remote Desktop Protocol is allowed in the security group. By default, the server listens on TCP port 3389 and UDP port 3389.

(The option with port 22 is incorrect because port 22 is used for SSH connections, not RDP.)
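As a sketch, the rule could be added with boto3 as follows; the group ID and source CIDR are placeholders:

  import boto3

  ec2 = boto3.client("ec2")

  # Allow RDP (TCP 3389) from a trusted address range only.
  ec2.authorize_security_group_ingress(
      GroupId="sg-0123456789abcdef0",
      IpPermissions=[{
          "IpProtocol": "tcp",
          "FromPort": 3389,
          "ToPort": 3389,
          "IpRanges": [{"CidrIp": "203.0.113.0/24",
                        "Description": "Office RDP access"}],
      }],
  )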

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
125
Q

A large Philippine-based Business Process Outsourcing company is building a two-tier web application in their VPC to serve dynamic transaction-based content. The data tier is leveraging an Online Transactional Processing (OLTP) database but for the web tier, they are still deciding what service they will use. What AWS services should you leverage to build an elastic and scalable web tier?

  • Amazon RDS with Multi-AZ and Auto Scaling
  • ELB, EC2 and Auto Scaling
  • EC2, DynamoDB and S3
  • ELB, RDS with Multi-AZ and S3
A

Amazon RDS is a suitable database service for online transaction processing (OLTP) applications. However, the question asks for a list of AWS services for the web tier, not the database tier. When it comes to providing scalability and elasticity for your web tier, Auto Scaling and Elastic Load Balancing should immediately come to mind. Therefore, Elastic Load Balancing, Amazon EC2, and Auto Scaling is the correct answer.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
126
Q

An application is using a Lambda function to process complex financial data which runs for about 10 to 15 minutes. You noticed that there are a few terminated invocations throughout the day, which caused data discrepancy in the application. Which of the following is the most likely cause of this issue?

  • The failed Lambda invocations contain a “ServiceException” error, which means that the AWS Lambda service encountered an internal error
  • The Lambda function contains recursive code and has been running for over 15 minutes
  • The failed Lambda functions have been running for over 15 minutes and reached the maximum execution time
  • The concurrent execution limit has been reached
A

The failed Lambda functions have been running for over 15 minutes and reached the maximum execution time

You pay for the AWS resources that are used to run your Lambda function. To prevent your Lambda function from running indefinitely, you specify a timeout. When the specified timeout is reached, AWS Lambda terminates execution of your Lambda function. It is recommended that you set this value based on your expected execution time. The default timeout is 3 seconds and the maximum execution duration per request in AWS Lambda is 900 seconds, which is equivalent to 15 minutes.

Hence, the correct answer is the option that says: The failed Lambda functions have been running for over 15 minutes and reached the maximum execution time.
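If the workload genuinely needs the full window, the timeout can be raised to the maximum. A boto3 sketch with a placeholder function name:

  import boto3

  lambda_client = boto3.client("lambda")

  # Raise the function timeout to the 900-second (15-minute) maximum.
  lambda_client.update_function_configuration(
      FunctionName="financial-data-processor",  # placeholder name
      Timeout=900,
  )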

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
127
Q

You are working for a computer animation film studio that has a web application running on an Amazon EC2 instance. It uploads 5 GB video objects to an Amazon S3 bucket. Video uploads are taking longer than expected, which impacts the performance of your application. Which method will help improve the performance of your application?

A

Use S3 Multipart upload API

The main issue is the slow upload time of the video objects to Amazon S3. To address this issue, you can use multipart upload in S3 to improve throughput. It allows you to upload parts of your object in parallel, thus decreasing the time it takes to upload large objects. Each part is a contiguous portion of the object’s data.
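A short boto3 sketch: upload_file switches to the multipart API automatically once the file exceeds the configured threshold and uploads the parts in parallel (file, bucket, and key names are placeholders):

  import boto3
  from boto3.s3.transfer import TransferConfig

  s3 = boto3.client("s3")

  config = TransferConfig(
      multipart_threshold=100 * 1024 * 1024,  # multipart above 100 MB
      multipart_chunksize=100 * 1024 * 1024,  # 100 MB parts
      max_concurrency=10,                     # parallel part uploads
  )
  s3.upload_file("render_final.mp4", "studio-video-bucket",
                 "videos/render_final.mp4", Config=config)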

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
128
Q

A global medical research company has a molecular imaging system which provides each client with frequently updated images of what is happening inside the human body at the molecular and cellular level. The system is hosted in AWS and the images are hosted in an S3 bucket behind a CloudFront web distribution. A new batch of updated images was uploaded to S3; however, users reported that they were still seeing the old content. You need to control which image will be returned by the system even when the user has another version cached either locally or behind a corporate caching proxy. Which of the following is the most suitable solution to solve this issue?

  • Invalidate the files in your CloudFront web distribution
  • Add Cache-Control no-cache, no-store, or private directive in the S3 bucket
  • Add a separate cache behavior path for the content and configure a custom object caching with a minimum TTL of 0
  • Use versioned objects
A

Use versioned objects.

To control the versions of files that are served from your distribution, you can either invalidate files or give them versioned file names. If you want to update your files frequently, AWS recommends that you primarily use file versioning:

  • Versioning enables you to control which file a request returns even when the user has a version cached either locally or behind a corporate caching proxy. If you invalidate the file, the user might continue to see the old version until it expires from those caches.
  • CloudFront access logs include the names of your files, so versioning makes it easier to analyze the results of file changes.
  • Versioning provides a way to serve different versions of files to different users.
  • Versioning simplifies rolling forward and back between file revisions.
  • Versioning is less expensive. You still have to pay for CloudFront to transfer new versions of your files to edge locations, but you don’t have to pay for invalidating files.

(Invalidating the files in your CloudFront web distribution is incorrect because although invalidation would also solve the issue, it is more expensive than using versioned objects.)
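For comparison, if you did choose invalidation instead, it is a single API call; the distribution ID and path below are placeholders:

  import time
  import boto3

  cloudfront = boto3.client("cloudfront")

  cloudfront.create_invalidation(
      DistributionId="E1ABCDE2FGHIJ3",  # placeholder distribution ID
      InvalidationBatch={
          "Paths": {"Quantity": 1, "Items": ["/images/*"]},
          "CallerReference": str(time.time()),  # must be unique per request
      },
  )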

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
129
Q

An online shopping platform has been deployed to AWS using Elastic Beanstalk. They simply uploaded their Node.js application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. Since the entire deployment process is automated, the DevOps team is not sure where to get the application log files of their shopping platform. In Elastic Beanstalk, where does it store the application files and server log files?

A

The correct answer is the option that says: Application files are stored in S3; the server log files can also optionally be stored in S3 or in CloudWatch Logs. AWS Elastic Beanstalk stores your application files and, optionally, server log files in Amazon S3.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
130
Q

You are planning to launch an application that tracks the GPS coordinates of delivery trucks in your country. The coordinates are transmitted from each delivery truck every five seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. The aggregated data will be analyzed in a separate reporting application. Which AWS service should you use for this scenario?

A

Amazon Kinesis
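As a sketch of the producer side, each truck could publish its position to a Kinesis data stream; the stream name and record fields are placeholders:

  import json
  import boto3

  kinesis = boto3.client("kinesis")

  kinesis.put_record(
      StreamName="truck-coordinates",  # placeholder stream name
      Data=json.dumps({"truck_id": "TRK-42",
                       "lat": 14.5995, "lon": 120.9842}).encode("utf-8"),
      PartitionKey="TRK-42",  # keeps each truck's records ordered
  )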

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
131
Q

You would like to share some documents with public users accessing an S3 bucket over the Internet. What are two valid methods of granting public read permissions so you can share the documents? (choose 2)

  1. Grant public read access to the objects when uploading
  2. Share the documents using CloudFront and a static website
  3. Use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket granting read access to public anonymous users
  4. Grant public read on all objects using the S3 bucket ACL
  5. Share the documents using a bastion host in a public subnet
A

1, 3

Access policies define access to resources and can be associated with resources (buckets and objects) and users.

You can use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket. Bucket policies can be used to grant permissions to objects.
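The generated policy can then be applied with a single call; a boto3 sketch with a placeholder bucket name:

  import json
  import boto3

  s3 = boto3.client("s3")

  # Grant anonymous read access to every object in the bucket.
  policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "PublicReadGetObject",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::example-docs-bucket/*",
      }],
  }
  s3.put_bucket_policy(Bucket="example-docs-bucket",
                       Policy=json.dumps(policy))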

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
132
Q

A Solutions Architect is designing an authentication solution using the AWS STS that will provide temporary, limited-privilege credentials for IAM users or for users that you authenticate (federated users). What supported sources are available to the Architect for users? (choose 2)

  1. OpenID Connect
  2. EC2 instance
  3. Cognito identity pool
  4. Another AWS account
  5. A local user on a user’s PC
A

1. OpenID Connect and 4. Another AWS account

The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for IAM users or for users that you authenticate (federated users).

Federation can come from three sources:

  • Federation (typically AD)
  • Federation with Mobile Apps (e.g. Facebook, Amazon, Google or other OpenID providers)
  • Cross account access (another AWS account)

The question has asked for supported sources for users. Cognito user pools contain users, but identity pools do not.
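For the cross-account case, the request for temporary credentials looks roughly like this; the role ARN and session name are placeholders:

  import boto3

  sts = boto3.client("sts")

  response = sts.assume_role(
      RoleArn="arn:aws:iam::210987654321:role/CrossAccountReadOnly",
      RoleSessionName="audit-session",
      DurationSeconds=3600,
  )
  # Temporary AccessKeyId, SecretAccessKey, and SessionToken
  creds = response["Credentials"]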

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
133
Q

You are building an application that will collect information about user behavior. The application will rapidly ingest large amounts of dynamic data and requires very low latency. The database must be scalable without incurring downtime. Which database would you recommend for this scenario?

  • RDS with MySQL
  • DynamoDB
  • RedShift
  • RDS with Microsoft SQL
A

DynamoDB

  • Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability
  • Push-button scaling means that you can scale the DB at any time without incurring downtime
  • DynamoDB provides low read and write latency
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
134
Q

An application tier of a multi-tier web application currently hosts two web services on the same set of instances. The web services each listen for traffic on different ports. Which AWS service should a Solutions Architect use to route traffic to the service based on the incoming request path?

  1. Application Load Balancer (ALB)
  2. Amazon Route 53
  3. Classic Load Balancer (CLB)
  4. Amazon CloudFront
A

1. Application Load Balancer (ALB)

An Application Load Balancer is a type of Elastic Load Balancer that can use layer 7 (HTTP/HTTPS) protocol data to make forwarding decisions. An ALB supports both path-based (e.g. /images or /orders) and host-based routing (e.g. example.com).
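A boto3 sketch of a path-based rule; the listener and target group ARNs are placeholders:

  import boto3

  elbv2 = boto3.client("elbv2")

  # Forward /orders/* requests to a dedicated target group.
  elbv2.create_rule(
      ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                  "listener/app/my-alb/1234567890abcdef/fedcba0987654321",
      Priority=10,
      Conditions=[{"Field": "path-pattern", "Values": ["/orders/*"]}],
      Actions=[{
          "Type": "forward",
          "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                            "123456789012:targetgroup/orders/abcdef1234567890",
      }],
  )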

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
135
Q

An application runs on two EC2 instances in private subnets split between two AZs. The application needs to connect to a CRM SaaS application running on the Internet. The vendor of the SaaS application restricts authentication to a whitelist of source IP addresses and only 2 IP addresses can be configured per customer. What is the most appropriate and cost-effective solution to enable authentication to the SaaS application?

  1. Use a Network Load Balancer and configure a static IP for each AZ
  2. Use multiple Internet-facing Application Load Balancers with Elastic IP addresses
  3. Configure a NAT Gateway for each AZ with an Elastic IP address
  4. Configure redundant Internet Gateways and update the routing tables for each subnet
A

3. Configure a NAT Gateway for each AZ with an Elastic IP address

A NAT Gateway is created in a specific AZ and can have a single Elastic IP address associated with it. NAT Gateways are deployed in public subnets and the route tables of the private subnets where the EC2 instances reside are configured to forward Internet-bound traffic to the NAT Gateway. You do pay for using a NAT Gateway based on hourly usage and data processing, however this is still a cost-effective solution.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
136
Q

Your company would like to restrict the ability of most users to change their own passwords whilst continuing to allow a select group of users within specific user groups. What is the best way to achieve this?

  1. Under the IAM Password Policy deselect the option to allow users to change their own passwords
  2. Create an IAM Policy that grants users the ability to change their own password and attach it to the groups that contain the users
  3. Create an IAM Role that grants users the ability to change their own password and attach it to the groups that contain the users
  4. Create an IAM Policy that grants users the ability to change their own password and attach it to the individual user accounts
  5. Disable the ability for all users to change their own passwords using the AWS Security Token Service
A

1, 2

137
Q

An application you are designing receives and processes files. The files are typically around 4GB in size and the application extracts metadata from the files which typically takes a few seconds for each file. The pattern of updates is highly dynamic with times of little activity and then multiple uploads within a short period of time. What architecture will address this workload the most cost efficiently?

  • Upload files into an S3 bucket, and use the Amazon S3 event notification to invoke a Lambda function to extract the metadata
  • Place the files in an SQS queue, and use a fleet of EC2 instances to extract the metadata
  • Store the file in an EBS volume which can then be accessed by another EC2 instance for processing
  • Use a Kinesis data stream to store the file, and use Lambda for processing
A

Storing the files in an S3 bucket is cost-efficient, and using S3 event notifications to invoke a Lambda function works well for this unpredictable workload.

SQS queues have a maximum message size of 256KB. You can use the extended client library for Java to use pointers to a payload on S3, but the maximum payload size is 2GB.
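Wiring the notification up is one call once the Lambda function grants s3.amazonaws.com permission to invoke it; the bucket name and function ARN below are placeholders:

  import boto3

  s3 = boto3.client("s3")

  s3.put_bucket_notification_configuration(
      Bucket="incoming-files",
      NotificationConfiguration={
          "LambdaFunctionConfigurations": [{
              "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:"
                                   "function:extract-metadata",
              "Events": ["s3:ObjectCreated:*"],
          }],
      },
  )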

138
Q

A colleague from your company’s IT Security team has notified you of an Internet-based threat that affects a certain port and protocol combination. You have conducted an audit of your VPC and found that this port and protocol combination is allowed on an Inbound Rule with a source of 0.0.0.0/0. You have verified that this rule only exists for maintenance purposes and need to make an urgent change to block the access. What is the fastest way to block access from the Internet to the specific ports and protocols?

  1. You don’t need to do anything; this rule will only allow access to VPC based resources
  2. Update the security group by removing the rule
  3. Delete the security group
  4. Add a deny rule to the security group with a higher priority
A

2. Update the security group by removing the rule

  • Security group membership can be changed whilst instances are running
  • Any changes to security groups take effect immediately
  • You can only assign permit rules in a security group; you cannot assign deny rules
139
Q

You are a Solutions Architect at Digital Cloud Training. One of your clients has requested that you design a solution for distributing load across a number of EC2 instances across multiple AZs within a region. Customers will connect to several different applications running on the client’s servers through their browser using multiple domain names and SSL certificates. The certificates are stored in AWS Certificate Manager (ACM). What is the optimal architecture to ensure high availability, cost effectiveness, and performance?

  1. Launch a single ALB and bind multiple SSL certificates to multiple secure listeners
  2. Launch a single ALB and bind multiple SSL certificates to the same secure listener. Clients will use the Server Name Indication (SNI) extension
  3. Launch multiple ALBs and bind separate SSL certificates to each ELB
  4. Launch a single ALB, configure host-based routing for the domain names and bind an SSL certificate to each routing rule
A

2. Launch a single ALB and bind multiple SSL certificates to the same secure listener. Clients will use the Server Name Indication (SNI) extension

You can use a single ALB and bind multiple SSL certificates to the same listener.

With Server Name Indication (SNI) a client indicates the hostname to connect to. SNI supports multiple secure websites using a single secure listener.
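Additional ACM certificates can be bound to the existing HTTPS listener one call at a time; the ARNs below are placeholders:

  import boto3

  elbv2 = boto3.client("elbv2")

  # SNI lets the ALB pick the right certificate per requested hostname.
  elbv2.add_listener_certificates(
      ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                  "listener/app/my-alb/1234567890abcdef/fedcba0987654321",
      Certificates=[{
          "CertificateArn": "arn:aws:acm:us-east-1:123456789012:"
                            "certificate/abcd1234-ef56-7890-abcd-1234567890ef",
      }],
  )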

140
Q

A Linux instance running in your VPC requires some configuration changes to be implemented locally and you need to run some commands. Which of the following can be used to securely connect to the instance?

  1. EC2 password
  2. Key Pairs
  3. Public key
  4. SSL/TLS certificate
A

A key pair consists of a public key that AWS stores and a private key file that you store.

For Linux AMIs, the private key file allows you to securely SSH into your instance

141
Q

One of your EC2 instances runs an application process that saves user data to an attached EBS volume. The EBS volume was attached to the EC2 instance after it was launched and is unencrypted. You would like to encrypt the data that is stored on the volume as it is considered sensitive; however, you cannot shut down the instance due to other application processes that are running. What is the best method of applying encryption to the sensitive data without any downtime?

  1. Create an encrypted snapshot of the current EBS volume. Restore the snapshot to the EBS volume
  2. Create and mount a new encrypted EBS volume. Move the data to the new volume and then delete the old volume
  3. Unmount the volume and enable server-side encryption. Re-mount the EBS volume
  4. Leverage the AWS Encryption CLI to encrypt the data on the volume
A

2. Create and mount a new encrypted EBS volume. Move the data to the new volume and then delete the old volume

Either create an encrypted volume and copy data to it, or take a snapshot, encrypt it, and create a new encrypted volume from the snapshot.
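A sketch of the snapshot route with boto3 (volume ID, region, and AZ are placeholders); the data-copy route simply attaches a new encrypted volume and copies the files across at the OS level:

  import boto3

  ec2 = boto3.client("ec2", region_name="us-east-1")

  # 1. Snapshot the unencrypted volume.
  snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")
  ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

  # 2. Copy the snapshot with encryption enabled.
  copy = ec2.copy_snapshot(SourceSnapshotId=snap["SnapshotId"],
                           SourceRegion="us-east-1", Encrypted=True)
  ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

  # 3. Create an encrypted volume from the copy, ready to attach.
  ec2.create_volume(SnapshotId=copy["SnapshotId"],
                    AvailabilityZone="us-east-1a")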

142
Q

The website for a new application receives around 50,000 requests each second and the company wants to use multiple applications to analyze the navigation patterns of the users on their website so they can personalize the user experience.

What can a Solutions Architect use to collect page clicks for the website and process them sequentially for each user?

  1. Amazon SQS standard queue
  2. Amazon SQS FIFO queue
  3. Amazon Kinesis Streams
  4. AWS CloudTrail trail
A

3. Amazon Kinesis Streams

This is a good use case for Amazon Kinesis streams as it is able to scale to the required load, allow multiple applications to access the records, and process them sequentially.

Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications.

143
Q

A customer has asked you to recommend the best solution for a highly available database. The database is a relational OLTP type of database and the customer does not want to manage the operating system the database runs on. Failover between AZs must be automatic. Which of the below options would you suggest to the customer?

  1. Use DynamoDB
  2. Use RDS in a Multi-AZ configuration
  3. Install a relational database on EC2 instances in multiple AZs and create a cluster
  4. Use RedShift in a Multi-AZ configuration
A

2. Use RDS in a Multi-AZ configuration

Amazon Relational Database Service (Amazon RDS) is a managed service that makes it easy to set up, operate, and scale a relational database in the cloud. With RDS you can configure Multi-AZ, which creates a replica in another AZ and synchronously replicates to it (DR only).

144
Q

You are troubleshooting a connectivity issue where you cannot connect to an EC2 instance in a public subnet in your VPC from the Internet. Which of the configuration items in the list below would you check first? (choose 2)

  1. The subnet has “Auto-assign public IPv4 address” set to “Yes”
  2. There is a NAT Gateway installed in the subnet
  3. The subnet route table has an attached NAT Gateway
  4. The security group attached to the EC2 instance has an inbound rule allowing the traffic
  5. The EC2 instance has a private IP address associated with it
A

1, 4

Public subnets are subnets that have:

  • “Auto-assign public IPv4 address” set to “Yes”, which will assign a public IP
  • A route table with an attached Internet Gateway

The instance will also need a security group with an inbound rule allowing the traffic.

145
Q

You would like to provide some on-demand and live streaming video to your customers. The plan is to provide the users with both the media player and the media files from the AWS cloud. One of the features you need is for the content of the media files to begin playing while the file is still being downloaded. What AWS services can deliver these requirements? (choose 2)

  1. Use CloudFront with a Web and RTMP distribution
  2. Use CloudFront with an RTMP distribution
  3. Store the media files on an EC2 instance
  4. Store the media files in an S3 bucket
  5. Store the media files on an EBS volume
A

1, 4

For serving both the media player and media files you need two types of distributions:

  • A web distribution for the media player
  • An RTMP distribution for the media files

RTMP:

  • Distribute streaming media files using Adobe Flash Media Server’s RTMP protocol
  • Allows an end user to begin playing a media file before the file has finished downloading from a CloudFront edge location
  • Files must be stored in an S3 bucket (not an EBS volume or EC2 instance)
146
Q

There is a new requirement to implement in-memory caching for a Financial Services application due to increasing read-heavy load. The data must be stored persistently. Automatic failover across AZs is also required. Which two items from the list below are required to deliver these requirements? (choose 2)

  1. ElastiCache with the Redis engine
  2. ElastiCache with the Memcached engine
  3. Read replica with failover mode enabled
  4. Multi-AZ with Cluster mode and Automatic Failover enabled
  5. Multiple nodes placed in different AZs
A

1,4

  • Redis:
    • Redis engine stores data persistently
    • Redis engine supports Multi-AZ using read replicas in another AZ in the same region
  • Memcached:
    • Memcached engine does not store data persistently
    • Memcached does not support Multi-AZ failover or replication
147
Q

A Solutions Architect is designing a data archive strategy using Amazon Glacier. The Architect needs to explain the features of the service to his manager. Which statements about Glacier are correct? (choose 2)

  1. Glacier objects are visible through the Glacier console
  2. Glacier objects are visible through S3 only
  3. The contents of an archive can be modified after uploading
  4. Uploading archives is synchronous; downloading archives is asynchronous
  5. Retrieval is immediate
A

2,4

  • Glacier objects are visible through S3 only (not Glacier directly)
  • The contents of an archive that has been uploaded cannot be modified
  • Uploading archives is synchronous
  • Downloading archives is asynchronous
  • Retrieval can take a few hours
148
Q

A Solutions Architect is developing a mobile web app that will provide access to health related data. The web apps will be tested on Android and iOS devices. The Architect needs to run tests on multiple devices simultaneously and to be able to reproduce issues, and record logs and performance data to ensure quality before release. What AWS service can be used for these requirements?

  1. AWS Cognito
  2. AWS Device Farm
  3. AWS Workspaces
  4. Amazon Appstream 2.0
A
2. AWS Device Farm

AWS Device Farm is an app testing service that lets you test and interact with your Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real time.

149
Q

The association between a poll-based source and a Lambda function is called the event source mapping. Event sources maintain the mapping configuration except for stream-based services such as ________ and ________ for which the configuration is made on the Lambda side and Lambda performs the polling. Fill in the blanks from the options below (choose 2)

  1. DynamoDB
  2. S3
  3. IoT Button
  4. Kinesis
  5. API Gateway
A

1, 4

This question is really just asking you to identify which of the listed services are stream-based services. DynamoDB and Kinesis are both used for streaming data.

150
Q

The data scientists in your company are looking for a service that can process and analyze real-time, streaming data. They would like to use standard SQL queries to query the streaming data. Which combination of AWS services would deliver these requirements?

  1. DynamoDB and EMR
  2. Kinesis Data Streams and Kinesis Data Analytics
  3. ElastiCache and EMR
  4. Kinesis Data Streams and Kinesis Firehose
A

2. Kinesis Data Streams and Kinesis Data Analytics

Amazon Kinesis Data Analytics is the easiest way to process and analyze real-time, streaming data. Kinesis Data Analytics can use standard SQL queries to process Kinesis data streams and can ingest data from Kinesis Streams and Kinesis Firehose, but Firehose cannot be used for running SQL queries.

151
Q

You are a Solutions Architect at a media company and you need to build an application stack that can receive customer comments from sporting events. The application is expected to receive significant load that could scale to millions of messages within a short space of time following high-profile matches. As you are unsure of the load required for the database layer, what is the most cost-effective way to ensure that the messages are not dropped?

  1. Use RDS Auto Scaling for the database layer which will automatically scale as required
  2. Create an SQS queue and modify the application to write to the SQS queue. Launch another application instance that polls the queue and writes messages to the database
  3. Write the data to an S3 bucket, configure RDS to poll the bucket for new messages
  4. Use DynamoDB and provision enough write capacity to handle the highest expected load
A

2. Create an SQS queue and modify the application to write to the SQS queue. Launch another application instance that polls the queue and writes messages to the database. The queue acts as a durable buffer, so messages are not dropped while the database layer catches up with the load.
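A minimal sketch of the producer side; the queue URL and message body are placeholders:

  import boto3

  sqs = boto3.client("sqs")

  # Messages absorb bursts until the consumer writes them to the
  # database at its own pace.
  sqs.send_message(
      QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/comments",
      MessageBody='{"user": "fan42", "comment": "great match!"}',
  )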

152
Q

You are a Solutions Architect at Digital Cloud Training. A large multi-national client has requested a design for a multi-region, multi-master database. The client has requested that the database be designed for fast, massively scaled applications for a global user base. The database should be a fully managed service including the replication. Which AWS service can deliver these requirements?

  1. RDS with Multi-AZ
  2. S3 with Cross Region Replication
  3. DynamoDB with Global Tables and Cross Region Replication
  4. EC2 instances with EBS replication
A

3. DynamoDB with Global Tables and Cross Region Replication

Cross-region replication allows you to replicate across regions:

  • Amazon DynamoDB global tables provide a fully managed solution for deploying a multi-region, multi-master database
153
Q

The application development team in your company has a new requirement for the deployment of a container solution. You plan to use the AWS Elastic Container Service (ECS). The solution should include load balancing of incoming requests across the ECS containers and allow the containers to use dynamic host port mapping so that multiple tasks from the same service can run on the same container host. Which AWS load balancing configuration will support this?

  1. Use an Application Load Balancer (ALB) and map the ECS service to the ALB
  2. Use a Classic Load Balancer (CLB) and create a static mapping of the ports
  3. Use a Network Load Balancer (NLB) and host-based routing
  4. You cannot run multiple copies of a task on the same instance, because the ports would conflict
A

1. Use an Application Load Balancer (ALB) and map the ECS service to the ALB

An Application Load Balancer allows dynamic port mapping. You can have multiple tasks from a single service on the same container instance.

154
Q

To improve security in your AWS account you have decided to enable multi-factor authentication (MFA). You can authenticate using an MFA device in which two ways? (choose 2)

  1. Locally to EC2 instances
  2. Through the AWS Management Console
  3. Using biometrics
  4. Using a key pair
  5. Using the AWS API
A

2. Through the AWS Management Console and 5. Using the AWS API

You can authenticate using an MFA device in the following ways:

  • Through the AWS Management Console – the user is prompted for a user name, password and authentication code
  • Using the AWS API – restrictions are added to IAM policies and developers can request temporary security credentials and pass MFA parameters in their AWS STS API requests (see the sketch below)
  • Using the AWS CLI by obtaining temporary security credentials from STS (aws sts get-session-token)
155
Q

An application that was recently moved into the AWS cloud has been experiencing some authentication issues. The application is currently configured to authenticate to an on-premise Microsoft Active Directory Domain Controller via a VPN connection. Upon troubleshooting the issues, it seems that latency across the VPN connection is causing authentication to fail. Your company is very cost sensitive at the moment and the administrators of the Microsoft AD do not want to manage any additional directories. You need to resolve the issues quickly. What is the best solution to solve the authentication issues taking cost considerations into account?

  • Create an AWS Direct Connect connection to reduce the latency between your company and AWS
  • Use the AWS Active Directory Service for Microsoft Active Directory and join your existing on-premise domain
  • Install an additional Microsoft Active Directory Domain Controller for your existing domain on EC2 and configure the application to authenticate to the local DC
  • Use the AWS Active Directory Service for Microsoft Active Directory and create a new domain. Establish a trust relationship with your existing on-premise domain
A

3. Install an additional Microsoft Active Directory Domain Controller for your existing domain on EC2 and configure the application to authenticate to the local DC

The best answer is to install an additional Microsoft Active Directory Domain Controller for your existing domain on EC2:

  • When you build your own you can join an existing on-premise Active Directory domain/directory (replication mode)
156
Q

You are designing an identity, authorization and access management solution for the AWS cloud. The features you need include the ability to manage user accounts and group memberships, create and apply group policies, securely connect to Amazon EC2 instances, and provide Kerberos-based single sign-on (SSO). You do not need to establish trust relationships with other domains, use DNS dynamic update, implement schema extensions or use other advanced directory features. What would be the most cost-effective solution?

  1. Use AWS Simple AD
  2. Use AWS Directory Service for Microsoft AD
  3. Use Amazon Cloud Directory
  4. Use AD Connector
A

AWS Simple AD

AWS Simple AD is an inexpensive Active Directory-compatible service with common directory features. It is a standalone, fully managed directory on the AWS cloud. Simple AD is generally the least expensive option and the best choice when you have fewer than 5,000 users and don’t need advanced AD features. It is powered by a Samba 4 Active Directory-compatible server.

157
Q

For operational access to your AWS environment you are planning to set up a bastion host implementation. Which of the below are AWS best practices for setting up bastion hosts? (choose 2)

  1. Deploy in 2 AZs and use an Auto Scaling group to ensure that the number of bastion host instances always matches the desired capacity you specify during launch
  2. Bastion hosts are deployed in the private subnets of the VPC
  3. Elastic IP addresses are associated with the bastion instances to make it easier to remember and allow these IP addresses from on-premises firewalls
  4. Access to the bastion hosts is configured to 0.0.0.0/0 for ingress in security groups
  5. Ports are unrestricted to allow full operational access to the bastion hosts
A

1, 3

158
Q

An application running on an external website is attempting to initiate a request to your company’s website on AWS using API calls. A problem has been reported in which the requests are failing with an error that includes the following text: “Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource”. You have been asked to resolve the problem; what is the most likely solution?

  1. The IAM policy does not allow access to the API
  2. The ACL on the API needs to be updated
  3. Enable CORS on the API’s resources using the selected methods under the API Gateway
  4. The request is not secured with SSL/TLS
A

3. Enable CORS on the API’s resources using the selected methods under the API Gateway

159
Q

You are an entrepreneur building a small company with some resources running on AWS. As you have limited funding you are extremely cost conscious. What AWS service can help you to ensure your costs do not exceed your funding capacity and send you alerts via email or SNS topic?

  • Cost Explorer
  • AWS Budgets
  • AWS Billing Dashboard
  • Cost & Usage reports
A

2. AWS Budgets

AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. Budget alerts can be sent via email and/or Amazon Simple Notification Service (SNS) topic.

160
Q

A company is in the process of deploying an Amazon Elastic Map Reduce (EMR) cluster. Which of the statements below accurately describe the EMR service? (choose 2)

  1. EMR utilizes a hosted Hadoop framework running on Amazon EC2 and Amazon S3
  2. EMR makes it easy to securely stream video from connected devices to AWS for analytics, machine learning (ML), and other processing
  3. EMR launches all nodes for a given cluster in the same Amazon EC2 Availability Zone
  4. EMR clusters span availability zones providing redundancy
  5. EMR is a fully-managed service that makes it easy to set up and scale file storage in the Amazon Cloud
A

1, 3

Amazon EMR is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. EMR utilizes a hosted Hadoop framework running on Amazon EC2 and Amazon S3. EMR uses Apache Hadoop as its distributed data processing engine, which is an open source Java software framework that supports data-intensive distributed applications running on large clusters of commodity hardware.

EMR launches all nodes for a given cluster in the same Amazon EC2 Availability Zone.

161
Q

As a SysOps engineer working at Digital Cloud Training, you are constantly trying to improve your processes for collecting log data. Currently you are collecting logs from across your AWS resources using CloudWatch and a combination of standard and custom metrics. You are currently investigating how you can optimize the storage of log files collected by CloudWatch. Which of the following are valid options for storing CloudWatch log files? (choose 2)

  1. CloudWatch Logs
  2. RedShift
  3. EFS
  4. Splunk
  5. EBS
A

1, 4

Options for storing logs:

  • CloudWatch Logs
  • Centralized logging system (e.g. Splunk)
  • Custom script and store on S3
162
Q

Your company uses Amazon Glacier to store files that must be retained for compliance reasons and are rarely accessed. An auditor has requested access to some information that is stored in a Glacier archive. You have initiated an archive retrieval job. Which factors are important to know about the process from this point? (choose 2)

  1. An MFA device is required to access the files
  2. There is a charge if you delete data within 90 days
  3. Following retrieval, you have 24 hours to download your data
  4. Amazon Glacier must complete a job before you can get its output
  5. The retrieved data will always be encrypted
A

3. Following retrieval, you have 24 hours to download your data (24 hours is the default and can be changed) and 4. Amazon Glacier must complete a job before you can get its output

163
Q

A company is considering using EC2 Reserved Instances to reduce cost. The Architect involved is concerned about the potential limitations in flexibility of using RIs instead of On-Demand instances. Which of the following statements about RIs are useful to the Architect? (choose 2)

  1. RIs can be sold on the Reserved Instance Marketplace
  2. You can change the region with Convertible RIs
  3. There is a fee charged for any RI modifications
  4. You cannot launch RIs using Auto Scaling Groups
  5. You can use RIs in Placement Groups
A

1, 5

  • RIs can be sold on the Reserved Instance Marketplace
  • RIs can be used in Placement Groups
  • RIs can also be used in Auto Scaling Groups
  • You can also change the instance size within the same instance type
  • You can also switch AZ within the same region

164
Q

An AWS user has created a Provisioned IOPS EBS volume which is attached to an EBS optimized instance and configured 1000 IOPS. Based on the EC2 SLA, what is the average IOPS the user will achieve for most of the year?

  • 1000
  • 950
  • 990
  • 900
A

Unlike gp2, which uses a bucket and credit model to calculate performance, an io1 volume allows you to specify a consistent IOPS rate when you create the volume, and Amazon EBS delivers within 10 percent of the provisioned IOPS performance 99.9 percent of the time over a given year. Therefore you should expect to get 900 IOPS for most of the year (1,000 provisioned − 10% = 900).

165
Q

Several websites you run on AWS use multiple Internet-facing Elastic Load Balancers (ELB) to distribute incoming connections to EC2 instances running web applications. The ELBs are configured to forward using either TCP (layer 4) or HTTP (layer 7) protocols. You would like to start recording the IP addresses of the clients that connect to your web applications. Which ELB features will you implement with which protocols? (choose 2)

  1. X-Forwarded-For request header and TCP
  2. X-Forwarded-For request header for TCP and HTTP
  3. X-Forwarded-For request header and HTTP
  4. Proxy Protocol and TCP
  5. Proxy Protocol and HTTP
A

3, 4

Proxy Protocol for TCP/SSL carries the source (client) IP/port information.

X-Forwarded-For for HTTP/HTTPS carries the source IP/port information.

In both cases the protocol carries the source IP/port information right through to the web server. If you were happy to just record the source connections on the load balancer you could use access logs.

166
Q

What is the X-Forwarded-For request header?

A

The X-Forwarded-For (XFF) header is a de-facto standard header for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or a load balancer.

167
Q

Your company has offices in several locations around the world. Each office utilizes resources deployed in the geographically closest AWS region. You would like to implement connectivity between all of the VPCs so that you can provide full access to each other’s resources. As you are security conscious you would like to ensure the traffic is encrypted and does not traverse the public Internet. The topology should be many-to-many to enable all VPCs to access the resources in all other VPCs. How can you successfully implement this connectivity using only AWS services? (choose 2)

  1. Use software VPN appliances running on EC2 instances
  2. Use VPC endpoints between VPCs
  3. Use inter-region VPC peering
  4. Implement a fully meshed architecture
  5. Implement a hub and spoke architecture
A

3, 4

You cannot do transitive peering, so a hub and spoke architecture would not allow all VPCs to communicate directly with each other. For this you need to establish a mesh topology.

168
Q

You are undertaking a project to make some audio and video files that your company uses for onboarding new staff members available via a mobile application. You are looking for a cost-effective way to convert the files from their current formats into formats that are compatible with smartphones and tablets. The files are currently stored in an S3 bucket. What AWS service can help with converting the files?

  1. MediaConvert
  2. Data Pipeline
  3. Elastic Transcoder
  4. Rekognition
A

Elastic Transcoder

Amazon Elastic Transcoder is a highly scalable, easy to use and cost-effective way for developers and businesses to convert (or “transcode”) video and audio files from their source format into versions that will playback on devices like smartphones, tablets and PCs.

169
Q

A company uses CloudFront to provide low-latency access to cached files. An Architect is considering the implications of using CloudFront Regional Edge Caches. Which statements are correct in relation to this service? (choose 2)

  1. Regional Edge Caches are enabled by default for CloudFront Distributions
  2. There are additional charges for using Regional Edge Caches
  3. Regional Edge Caches have larger cache-width than any individual edge location, so your objects remain in cache longer at these locations
  4. Regional Edge Caches are read-only
  5. Distributions must be updated to use Regional Edge Caches
A

1, 3

Regional Edge Caches are located between origin web servers and global edge locations and have a larger cache than any individual edge location, so your objects remain in cache longer at these locations.

Regional Edge Caches aim to get content closer to users and are enabled by default for CloudFront Distributions (so you don’t need to update your distributions).

There are no additional charges for using Regional Edge Caches.

170
Q

The company you work for has a presence across multiple AWS regions. As part of disaster recovery planning you are formulating a solution to provide a regional DR capability for an application running on a fleet of Amazon EC2 instances that are provisioned by an Auto Scaling Group (ASG). The applications are stateless and read and write data to an S3 bucket. You would like to utilize the current AMI used by the ASG as it has some customizations made to it. What are the steps you might take to enable a regional DR capability for this application? (choose 2)

  1. Enable cross region replication on the S3 bucket and specify a destination bucket in the DR region
  2. Enable multi-AZ for the S3 bucket to enable synchronous replication to the DR region
  3. Modify the permissions of the AMI so it can be used across multiple regions
  4. Copy the AMI to the DR region and create a new launch configuration for the ASG that uses the AMI
  5. Modify the launch configuration for the ASG in the DR region and specify the AMI
A

1, 4

There are two parts to this solution. First you need to copy the S3 data to each region (as the instances are stateless), then you need to be able to deploy instances from an ASG using the same AMI in each region.

  • CRR is an Amazon S3 feature that automatically replicates data across AWS Regions. With CRR, every object uploaded to an S3 bucket is automatically replicated to a destination bucket in a different AWS Region that you choose; this enables you to copy the existing data across to each region
  • Both Amazon EBS-backed AMIs and instance store-backed AMIs can be copied between regions (see the sketch below). You can then use the copied AMI to create a new launch configuration (remember that you cannot modify an ASG launch configuration, you must create a new launch configuration)
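A sketch of the AMI copy, run against the DR region (AMI ID, names, and regions are placeholders):

  import boto3

  # copy_image is called in the destination (DR) region.
  ec2_dr = boto3.client("ec2", region_name="eu-west-1")

  copy = ec2_dr.copy_image(
      Name="app-ami-dr-copy",
      SourceImageId="ami-0123456789abcdef0",
      SourceRegion="us-east-1",
  )
  # Use this AMI ID in the new launch configuration for the DR ASG.
  print(copy["ImageId"])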
171
Q

An application hosted in your VPC uses an EC2 instance with a MySQL DB running on it. The database uses a single 1TB General Purpose SSD (GP2) EBS volume. Recently it has been noticed that the database is not performing well, and you need to improve the read performance. What are two possible ways this can be achieved? (choose 2)

  1. Add multiple EBS volumes in a RAID 1 array
  2. Add multiple EBS volumes in a RAID 0 array
  3. Add an RDS read replica in another AZ
  4. Use a provisioned IOPS volume and specify the number of I/O operations required
  5. Create an active/passive cluster using MySQL
A

2, 4

RAID 0 = striping – data is written across multiple disks, which increases performance but provides no redundancy.

SSD, Provisioned IOPS – io1 provides higher performance than General Purpose SSD (GP2) and you can specify the IOPS required, up to 50 IOPS per GB and a maximum of 32000 IOPS.

172
Q

What is the difference between RAID 0 and RAID 1?

A
  • RAID 0 = 0 striping – data is written across multiple disks and increases performance but no redundancy
  • RAID 1 = 1 mirroring – creates 2 copies of the data but does not increase performance, only redundancy
173
Q

Your company is reviewing their information security processes. One of the items that came out of a recent audit is that there is insufficient data recorded about requests made to a few S3 buckets. The security team requires an audit trail for operations on the S3 buckets that includes the requester, bucket name, request time, request action, and response status. Which action would you take to enable this logging?

  1. Create a CloudTrail trail that audits S3 bucket operations
  2. Enable S3 event notifications for the specific actions and setup an SNS notification
  3. Enable server access logging for the S3 buckets to save access logs to a specified destination bucket
  4. Create a CloudWatch metric that monitors the S3 bucket operations and triggers an alarm
A

3. Enable server access logging for the S3 buckets to save access logs to a specified destination bucket

Server access logging provides detailed records for the requests that are made to a bucket. To track requests for access to your bucket, you can enable server access logging. Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and an error code, if relevant.

(CloudWatch metrics do not include the bucket operations specified in the question.)
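Enabling it is a single call per audited bucket; the bucket names are placeholders, and the target bucket must grant the log delivery service permission to write:

  import boto3

  s3 = boto3.client("s3")

  s3.put_bucket_logging(
      Bucket="audited-bucket",
      BucketLoggingStatus={
          "LoggingEnabled": {
              "TargetBucket": "access-log-bucket",
              "TargetPrefix": "audited-bucket/",
          },
      },
  )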

174
Q

An application you manage uses Auto Scaling and a fleet of EC2 instances. You recently noticed that Auto Scaling is scaling the number of instances up and down multiple times in the same hour. You need to implement a remediation to reduce the amount of scaling events. The remediation must be cost-effective and preserve elasticity. What design changes would you implement? (choose 2)

  1. Modify the Auto Scaling group cool-down timers
  2. Modify the Auto Scaling group termination policy to terminate the oldest instance first
  3. Modify the Auto Scaling group termination policy to terminate the newest instance first
  4. Modify the CloudWatch alarm period that triggers your Auto Scaling scale down policy
  5. Modify the Auto Scaling policy to use scheduled scaling actions
A

1, 4

The cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that it doesn’t launch or terminate additional instances before the previous scaling activity takes effect, so this would help. After the Auto Scaling group dynamically scales using a simple scaling policy, it waits for the cooldown period to complete before resuming scaling activities.

The CloudWatch Alarm Evaluation Period is the number of the most recent data points to evaluate when determining alarm state. This would help as you can increase the number of data points required to trigger an alarm.

175
Q

A colleague has asked you some questions about how AWS charges for DynamoDB. He is interested in knowing what type of workload DynamoDB is best suited for in relation to cost and how AWS charges for DynamoDB. (choose 2)

  1. DynamoDB is more cost effective for read heavy workloads
  2. DynamoDB is more cost effective for write heavy workloads
  3. Priced based on provisioned throughput (read/write) regardless of whether you use it or not
  4. DynamoDB scales automatically and you are charged for what you use
  5. You provision for expected throughput but are only charged for what you use
A

1, 3

DynamoDB charges:

  • DynamoDB is more cost effective for read-heavy workloads
  • It is priced based on provisioned throughput (read/write) regardless of whether you use it or not

NOTE: With the DynamoDB Auto Scaling feature you can now have DynamoDB dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. However, this is relatively new and may not yet feature on the exam.

176
Q

A Solutions Architect is responsible for a web application that runs on EC2 instances that sit behind an Application Load Balancer (ALB). Auto Scaling is used to launch instances across 3 Availability Zones. The web application serves large image files and these are stored on an Amazon EFS file system. Users have experienced delays in retrieving the files and the Architect has been asked to improve the user experience. What should the Architect do to improve user experience?

  1. Move the digital assets to EBS
  2. Reduce the file size of the images
  3. Cache static content using CloudFront
  4. Use Spot instances
A

3. Cache static content using CloudFront

177
Q

You are a Solutions Architect at Digital Cloud Training. One of your clients runs an application that writes data to a DynamoDB table. The client has asked how they can implement a function that runs code in response to item level changes that take place in the DynamoDB table. What would you suggest to the client?

  1. Enable server access logging and create an event source mapping between AWS Lambda and the S3 bucket to which the logs are written
  2. Enable DynamoDB Streams and create an event source mapping between AWS Lambda and the relevant stream
  3. Create a local secondary index that records item level changes and write some custom code that responds to updates to the index
  4. Use Kinesis Data Streams and configure DynamoDB as a producer
A

2. Enable DynamoDB Streams and create an event source mapping between AWS Lambda and the relevant stream
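A sketch of the event source mapping; the stream ARN and function name are placeholders:

  import boto3

  lambda_client = boto3.client("lambda")

  # Lambda polls the DynamoDB stream and invokes the function with
  # batches of item-level change records.
  lambda_client.create_event_source_mapping(
      EventSourceArn="arn:aws:dynamodb:us-east-1:123456789012:"
                     "table/Orders/stream/2024-01-01T00:00:00.000",
      FunctionName="process-item-changes",
      StartingPosition="LATEST",
      BatchSize=100,
  )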

178
Q

Your company is starting to use AWS to host new web-based applications. A new two-tier application will be deployed that provides customers with access to data records. It is important that the application is highly responsive and retrieval times are optimized. You’re looking for a persistent data store that can provide the required performance. From the list below what AWS service would you recommend for this requirement?

  1. ElastiCache with the Memcached engine
  2. ElastiCache with the Redis engine
  3. Kinesis Data Streams
  4. RDS in a multi-AZ configuration
A
2. ElastiCache with the Redis engine

Redis:

  • Data is persistent
  • Can be used as a datastore
  • Not multi-threaded
  • Scales by adding shards, not nodes
179
Q

You are a Solutions Architect at Digital Cloud Training. A client from a large multinational corporation is working on a deployment of a significant amount of resources into AWS. The client would like to be able to deploy resources across multiple AWS accounts and regions using a single toolset and template. You have been asked to suggest a toolset that can provide this functionality?

  1. Use a CloudFormation template that creates a stack and specify the logical IDs of each account and region
  2. Use a CloudFormation StackSet and specify the target accounts and regions in which the stacks will be created
  3. Use a third-party product such as Terraform that has support for multiple AWS accounts and regions
  4. This cannot be done, use separate CloudFormation template per AWS account and region
A

2. AWS CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and regions with a single operation. Before you can use a stack set to create stacks in a target account, you must set up a trust relationship between the administrator and target accounts.
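A boto3 sketch of that two-step flow; the stack set name, template URL, account IDs, and regions are all placeholders:

  import boto3

  cfn = boto3.client("cloudformation")

  # Create the stack set once...
  cfn.create_stack_set(
      StackSetName="baseline-resources",
      TemplateURL="https://s3.amazonaws.com/example-bucket/baseline.yaml",
  )
  # ...then fan it out to the target accounts and regions.
  cfn.create_stack_instances(
      StackSetName="baseline-resources",
      Accounts=["111111111111", "222222222222"],
      Regions=["us-east-1", "eu-west-1"],
  )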

180
Q

Your client is looking for a fully managed directory service in the AWS cloud. The service should provide an inexpensive Active Directory-compatible service with common directory features. The client is a medium-sized organization with 4000 users. As the client has a very limited budget it is important to select a cost-effective solution. What would you suggest?

  1. AWS Active Directory Service for Microsoft Active Directory
  2. AWS Simple AD
  3. Amazon Cognito
  4. AWS Single Sign-On
A
2. AWS Simple AD

Simple AD is an inexpensive Active Directory-compatible service with common directory features. It is a standalone, fully managed, directory on the AWS cloud and is generally the least expensive option. It is the best choice for less than 5000 users and when you don’t need advanced AD features.

181
Q

You have been asked to implement a solution for capturing, transforming and loading streaming data into an Amazon RedShift cluster. The solution will capture data from Amazon Kinesis Data Streams. Which AWS services would you utilize in this scenario? (choose 2)

  1. Kinesis Data Firehose for capturing the data and loading it into RedShift
  2. Kinesis Video Streams for capturing the data and loading it into RedShift
  3. EMR for transforming the data
  4. AWS Data Pipeline for transforming the data
  5. Lambda for transforming the data
A

1, 5. For this solution Kinesis Data Firehose can be used as it can use Kinesis Data Streams as a source and can capture, transform, and load streaming data into a RedShift cluster. Kinesis Data Firehose can invoke a Lambda function to transform data before delivering it to destinations.

182
Q

You are creating a design for a web-based application that will be based on a web front-end using EC2 instances and a database back-end. This application is a low priority and you do not want to incur costs in general day to day management. Which AWS database service can you use that will require the least operational overhead?

  1. RDS
  2. RedShift
  3. EMR
  4. DynamoDB
A

DynamoDB

Out of the options in the list, DynamoDB requires the least operational overhead as there are no backups, maintenance periods, software updates, etc. to deal with.

183
Q

A new Big Data application you are developing will use hundreds of EC2 instances to write data to a shared file system. The file system must be stored redundantly across multiple AZs within a region and allow the EC2 instances to concurrently access the file system. The required throughput is multiple GB per second. From the options presented which storage solution can deliver these requirements?

  1. Amazon EBS using multiple volumes in a RAID 0 configuration
  2. Amazon EFS
  3. Amazon S3
  4. Amazon Storage Gateway
A

2. EFS. Amazon EFS is the best solution as it is the only option that provides file-level storage (not block- or object-based), stores data redundantly across multiple AZs within a region, and lets you concurrently connect up to thousands of EC2 instances to a single file system.

184
Q

A company has deployed Amazon RedShift for performing analytics on user data. When using Amazon RedShift, which of the following statements are correct in relation to availability and durability? (choose 2)

  1. RedShift always keeps three copies of your data
  2. Single-node clusters support data replication
  3. RedShift provides continuous/incremental backups
  4. RedShift always keeps five copies of your data
  5. Manual backups are automatically deleted when you delete a cluster
A

1, 3

RedShift always keeps three copies of your data and provides continuous/incremental backups.

185
Q

What is AWS Glue?

A

AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. You can create and run an ETL job with a few clicks in the AWS Management Console.

186
Q

A Solutions Architect is developing an encryption solution. The solution requires that data keys are encrypted using envelope protection before they are written to disk. Which solution option can assist with this requirement?

  1. AWS KMS API
  2. AWS Certificate Manager
  3. API Gateway with STS
  4. IAM Access Key
A
  1. AWS KMS API

The AWS KMS API can be used for encrypting data keys (envelope encryption).
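A sketch of envelope encryption with the KMS API: generate a data key, encrypt locally with the plaintext key, and store only the encrypted copy of the key (the CMK alias is a placeholder):

  import boto3

  kms = boto3.client("kms")

  response = kms.generate_data_key(
      KeyId="alias/app-data-key",  # placeholder CMK alias
      KeySpec="AES_256",
  )
  plaintext_key = response["Plaintext"]       # use locally, then discard
  encrypted_key = response["CiphertextBlob"]  # store alongside the data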

187
Q

You are planning to launch a RedShift cluster for processing and analyzing a large amount of data. The RedShift cluster will be deployed into a VPC with multiple subnets. Which construct is used when provisioning the cluster to allow you to specify a set of subnets in the VPC that the cluster will be deployed into?

  1. DB Subnet Group
  2. Subnet Group
  3. Availability Zone (AZ)
  4. Cluster Subnet Group”
A
  4. Cluster Subnet Group

“A cluster subnet group allows you to specify a set of subnets in your VPC”

188
Q

“There is a temporary need to share some video files that are stored in a private S3 bucket. The consumers do not have AWS accounts and you need to ensure that only authorized consumers can access the files. What is the best way to enable this access?”

A

Generate a pre-signed URL and distribute it to the consumers.

“S3 pre-signed URLs can be used to provide temporary access to a specific object to those who do not have AWS credentials.”
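A minimal boto3 sketch (bucket and key names are placeholders) that generates a URL valid for one hour:

  import boto3

  s3 = boto3.client('s3')
  url = s3.generate_presigned_url(
      'get_object',
      Params={'Bucket': 'my-video-bucket', 'Key': 'videos/launch.mp4'},
      ExpiresIn=3600,  # seconds the link remains valid
  )
  print(url)  # distribute this URL to the authorized consumers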

189
Q

“A Solutions Architect has been asked to suggest a solution for analyzing data in S3 using standard SQL queries. The solution should use a serverless technology. Which AWS service can the Architect use?

  1. Amazon Athena
  2. Amazon RedShift
  3. AWS Glue
  4. AWS Data Pipeline”
A

1. Athena. “Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run”
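A minimal boto3 sketch of running an Athena query (database, table, and result bucket names are placeholders); results are written to the S3 output location:

  import boto3

  athena = boto3.client('athena')
  resp = athena.start_query_execution(
      QueryString='SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status',
      QueryExecutionContext={'Database': 'weblogs'},
      ResultConfiguration={'OutputLocation': 's3://my-athena-results/'},
  )
  print(resp['QueryExecutionId'])  # poll get_query_execution() until it completes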

190
Q

“A Solutions Architect is deploying an Auto Scaling Group (ASG) and needs to determine what CloudWatch monitoring option to use. Which of the statements below would assist the Architect in making his decision? (choose 2)

  1. Basic monitoring is enabled by default if the ASG is created from the console
  2. Detailed monitoring is enabled by default if the ASG is created from the CLI
  3. Basic monitoring is enabled by default if the ASG is created from the CLI
  4. Detailed monitoring is chargeable and must always be manually enabled
  5. Detailed monitoring is free and can be manually enabled”
A

1,2

“Basic monitoring is enabled by default when the ASG is created from the console, whereas detailed monitoring of EC2 instances is enabled by default when the launch configuration is created from the CLI”

191
Q

“An EC2 instance that you manage has an IAM role attached to it that provides it with access to Amazon S3 for saving log data to a bucket. A change in the application architecture means that you now need to provide the additional ability for the application to securely make API requests to Amazon API Gateway. Which two methods could you use to resolve this challenge? (choose 2)

  1. Delegate access to the EC2 instance from the API Gateway management console
  2. Create an IAM role with a policy granting permissions to Amazon API Gateway and add it to the EC2 instance as an additional IAM role
  3. You cannot modify the IAM role assigned to an EC2 instance after it has been launched. You’ll need to recreate the EC2 instance and assign a new IAM role
  4. Create a new IAM role with multiple IAM policies attached that grants access to Amazon S3 and Amazon API Gateway, and replace the existing IAM role that is attached to the EC2 instance
  5. Add an IAM policy to the existing IAM role that the EC2 instance is using granting permissions to access Amazon API Gateway”
A

4, 5

“There are two possible solutions here. In one you create a new IAM role with multiple policies, in the other you add a new policy to the existing IAM role.

Contrary to one of the incorrect answers, you can modify IAM roles after an instance has been launched - this was changed some time ago now. However, you cannot add multiple IAM roles to a single EC2 instance. If you need to attach multiple policies you must attach them to a single IAM role. There is no such thing as delegating access using the API Gateway management console”
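Adding a policy to the existing role (option 5) might look like this in boto3; the role name is a placeholder and AmazonAPIGatewayInvokeFullAccess is an AWS managed policy:

  import boto3

  iam = boto3.client('iam')
  iam.attach_role_policy(
      RoleName='ec2-app-role',  # the role already attached to the EC2 instance
      PolicyArn='arn:aws:iam::aws:policy/AmazonAPIGatewayInvokeFullAccess',
  )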

192
Q

“You are using an Application Load Balancer (ALB) for distributing traffic for a number of application servers running on EC2 instances. The configuration consists of a single ALB with a single target group. The front-end listeners are receiving traffic for digitalcloud.training on port 443 (SSL/TLS) and the back-end listeners are receiving traffic on port 80 (HTTP).

You will be installing a new application component on one of the application servers in the existing target group that will process data sent to digitalcloud.training/orders. The application component will listen on HTTP port 8080 for this traffic. What configuration changes do you need to make to implement this solution update? (choose 2)

  1. Create a new target group and add the EC2 instance to it. Define the protocol as HTTP and the port as 8080
  2. Add an additional port to the existing target group and set it to 8080
  3. Add a new rule to the existing front-end listener with a Path condition. Set the path condition to /orders and add an action that forwards traffic to the new target group
  4. Add a new rule to the existing front-end listener with a Host condition. Set the host condition to /orders and add an action that forwards traffic to the new target group
  5. Add an additional front-end listener that listens on port 443 and set a path condition to process traffic destined to the path /orders”
A

1,3

“The traffic is coming in on standard ports (443/HTTPS, 80/HTTP) to a single front-end listener. You can only have a single listener running on a single port. Therefore to be able to direct traffic for a specific web page you need to use an ALB and path-based routing to direct the traffic to a specific back-end listener. As only one protocol and one port can be defined per target group you also need to create a new target group that uses port 8080 as a target.”
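Sketched with boto3 (ARNs and IDs are placeholders), the two changes are a new target group on port 8080 and a path-based rule on the existing listener:

  import boto3

  elbv2 = boto3.client('elbv2')

  # 1. New target group for the component listening on port 8080
  tg = elbv2.create_target_group(
      Name='orders-tg', Protocol='HTTP', Port=8080, VpcId='vpc-0abc1234')
  tg_arn = tg['TargetGroups'][0]['TargetGroupArn']
  elbv2.register_targets(TargetGroupArn=tg_arn,
                         Targets=[{'Id': 'i-0123456789abcdef0'}])

  # 2. Path-based rule on the existing front-end listener
  elbv2.create_rule(
      ListenerArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def',
      Priority=10,
      Conditions=[{'Field': 'path-pattern', 'Values': ['/orders*']}],
      Actions=[{'Type': 'forward', 'TargetGroupArn': tg_arn}],
  )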

193
Q

“You have been tasked with building an ECS cluster using the EC2 launch type and need to ensure container instances can connect to the cluster. A colleague informed you that you must ensure the ECS container agent is installed on your EC2 instances. You have selected to use the Amazon ECS-optimized AMI. Which of the statements below are correct? (choose 2)

  1. The Amazon ECS container agent is included in the Amazon ECS-optimized AMI
  2. The Amazon ECS container agent must be installed for all AMIs
  3. The Amazon ECS container agent is installed on the AWS managed infrastructure used for tasks using the EC2 launch type so you don’t need to do anything
  4. You can install the ECS container agent on any Amazon EC2 instance that supports the Amazon ECS specification
  5. You can only install the ECS container agent on Linux instances”
A

1,4

“The ECS container agent allows container instances to connect to the cluster and runs on each infrastructure resource in an ECS cluster. The ECS container agent is included in the Amazon ECS-optimized AMI and can also be installed on any EC2 instance that supports the ECS specification (it is only supported on EC2 instances). It is available for Linux and Windows”

194
Q

What’s the difference between ‘system status checks’ and ‘instance status checks’?

A

“System status checks detect (StatusCheckFailed_System) problems with your instance that require AWS involvement to repair whereas Instance status checks (StatusCheckFailed_Instance) detect problems that require your involvement to repair”

195
Q

“You work as an Enterprise Architect for Digital Cloud Training which employs 1500 people. The company is growing at around 5% per annum. The company strategy is to increasingly adopt AWS cloud services. There is an existing Microsoft Active Directory (AD) service that is used as the on-premise identity and access management system. You want to avoid synchronizing your directory into the AWS cloud or adding permissions to resources in another AD domain. How can you continue to utilize the on-premise AD for all authentication when consuming AWS cloud services?”

  1. “Install a Microsoft Active Directory Domain Controller on AWS and add it into your existing on-premise domain
  2. Launch an AWS Active Directory Service for Microsoft Active Directory and setup trust relationships with your on-premise domain
  3. Use a large AWS Simple AD in AWS
  4. Launch a large AWS Directory Service AD Connector to proxy all authentication back to your on-premise AD service for authentication”
A

4. “The important points here are that you need to utilize the on-premise AD for authentication with AWS services whilst not synchronizing the AD database into the cloud or setting up trust relationships (adding permissions to resources in another AD domain). AD Connector is a directory gateway for redirecting directory requests to your on-premise Active Directory and eliminates the need for directory synchronization. AD Connector is considered the best choice when you want to use an existing AD with AWS services. The small AD Connector is for up to 500 users and the large version caters for up to 5,000, so in this case we need to use the large AD Connector”

“Active Directory Service for Microsoft Active Directory is the best choice if you have more than 5000 users and is a standalone AD service in the cloud. You can also setup trust relationships with existing on-premise AD instances (though you can’t replicate/synchronize). In this case we want to leverage the on-premise AD and want to avoid trust relationships”

196
Q

“You are a Solutions Architect for a systems integrator. Your client is growing their presence in the AWS cloud and has applications and services running in a VPC across multiple availability zones within a region. The client has a requirement to build an operational dashboard within their on-premise data center within the next few months. The dashboard will show near real time statistics and therefore must be connected over a low latency, high performance network. What would be the best solution for this requirement?

  1. Use redundant VPN connections to two VGW routers in the region, this should give you access to the infrastructure in all AZs
  2. Order multiple AWS Direct Connect connections that will be connected to multiple AZs
  3. Order a single AWS Direct Connect connection to connect to the client’s VPC. This will provide access to all AZs within the region
  4. You cannot connect to multiple AZs from a single location”
A

3. “With AWS Direct Connect you can provision a low latency, high performance private connection between the client’s data center and AWS. Direct Connect connections connect you to a region and all AZs within that region. In this case the client has a single VPC so we know their resources are contained within a single region and therefore a single Direct Connect connection satisfies the requirements”

197
Q

“The security team in your company is defining new policies for enabling security analysis, resource change tracking, and compliance auditing. They would like to gain visibility into user activity by recording API calls made within the company’s AWS account. The information that is logged must be encrypted. This requirement applies to all AWS regions in which your company has services running. How will you implement this request? (choose 2)

  1. Create a CloudTrail trail and apply it to all regions
  2. Create a CloudTrail trail in each region in which you have services
  3. Enable encryption with a single KMS key
  4. Enable encryption with multiple KMS keys
  5. Use CloudWatch to monitor API calls”
A

1,3

“CloudTrail is used for recording API calls (auditing) whereas CloudWatch is used for recording metrics (performance monitoring). The solution can be deployed with a single trail that is applied to all regions. A single KMS key can be used to encrypt log files for trails applied to all regions. CloudTrail log files are encrypted using S3 Server Side Encryption (SSE) and you can also enable encryption SSE KMS for additional security”
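A minimal boto3 sketch (trail, bucket, and key alias names are placeholders) of a single all-region trail encrypted with one KMS key:

  import boto3

  cloudtrail = boto3.client('cloudtrail')
  cloudtrail.create_trail(
      Name='org-audit-trail',
      S3BucketName='my-cloudtrail-logs',   # bucket policy must allow CloudTrail writes
      IsMultiRegionTrail=True,             # one trail applied to all regions
      KmsKeyId='alias/cloudtrail-key',     # single KMS key for SSE-KMS encryption
  )
  cloudtrail.start_logging(Name='org-audit-trail')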

198
Q

“Your organization is deploying a multi-language website on the AWS Cloud. The website uses CloudFront as the front-end and the language is specified in the HTTP request:

  • http://d12345678aabbcc0.cloudfront.net/main.html?language=en
  • http://d12345678aabbcc0.cloudfront.net/main.html?language=sp
  • http://d12345678aabbcc0.cloudfront.net/main.html?language=fr

You need to configure CloudFront to deliver the cached content. What method can be used?

  1. Signed URLs
  2. Query string parameters
  3. Origin Access Identity
  4. Signed Cookies”
A
  2. Query string parameters

“Query string parameters cause CloudFront to forward query strings to the origin and to cache based on the language parameter”
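In the distribution’s cache behavior this corresponds to forwarding query strings and caching on the language parameter. A fragment of the configuration as a Python dict (using the legacy ForwardedValues settings of the CloudFront API; values are illustrative):

  # Fragment of a CloudFront cache behavior as passed to the CloudFront API
  cache_behavior = {
      'TargetOriginId': 'my-origin',
      'ViewerProtocolPolicy': 'redirect-to-https',
      'ForwardedValues': {
          'QueryString': True,                             # forward query strings
          'QueryStringCacheKeys': {'Quantity': 1,
                                   'Items': ['language']}, # cache per language value
          'Cookies': {'Forward': 'none'},
      },
  }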

199
Q

What are Origin Access Identities for?

A

An Origin Access Identity (OAI) is a special CloudFront user that you associate with a distribution to control access to content: the S3 origin is restricted so that objects can only be accessed through CloudFront, not directly.

200
Q

“A mobile client requires data from several application-layer services to populate its user interface. What can the application team use to decouple the client interface from the underlying services behind them?”

A

“Amazon API Gateway decouples the client application from the back-end application-layer services by providing a single endpoint for API requests”

201
Q

What is Amazon Cognito used for?

A

“Amazon Cognito is used for adding sign-up, sign-in and access control to mobile apps”

202
Q

“A new mobile application that your company is deploying will be hosted on AWS. The users of the application will use mobile devices to upload small amounts of data on a frequent basis. It is expected that the number of users connecting each day could be over 1 million. The data that is uploaded must be stored in a durable and persistent data store. The data store must also be highly available and easily scalable. Which AWS service would you use?”

A

DynamoDB

“Amazon DynamoDB is a fully managed NoSQL database service that provides a durable and persistent data store. You can scale DynamoDB using push button scaling which means that you can scale the DB at any time without incurring downtime. Amazon DynamoDB stores three geographically distributed replicas of each table to enable high availability and data durability”

203
Q

“As a Solutions Architect for Digital Cloud Training you are designing an online shopping application for a new client. The application will be composed of distributed, decoupled components to ensure that the failure of a single component does not affect the availability of the application.

You will be using SQS as the message queueing service and the client has stipulated that the messages related to customer orders must be processed in the order that they were submitted in the online application. The client expects that the peak rate of transactions will not exceed 140 transactions a second. What will you explain to the client?

  1. This is not possible with SQS as you cannot control the order in the queue
  2. The only way this can be achieved is by configuring the applications to process messages from the queue in the right order based on timestamps
  3. This can be achieved by using a FIFO queue which will guarantee the order of messages
  4. This is fine, standard SQS queues can guarantee the order of the messages”
A

3 = FIFO

“FIFO (first-in-first-out) queues preserve the exact order in which messages are sent and received. If you use a FIFO queue, you don’t have to place sequencing information in your messages, and FIFO queues provide exactly-once processing, which means that each message is delivered once and remains available until a consumer processes it and deletes it. FIFO queues support up to 300 transactions per second (more with batching), so the client’s peak of 140 transactions a second is within the limit and a FIFO queue fits the solution requirements for this question”
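A minimal boto3 sketch (queue and group names are placeholders); FIFO queue names must end in .fifo, and messages within the same message group are delivered in order:

  import boto3

  sqs = boto3.client('sqs')
  q = sqs.create_queue(
      QueueName='customer-orders.fifo',
      Attributes={'FifoQueue': 'true', 'ContentBasedDeduplication': 'true'},
  )
  sqs.send_message(
      QueueUrl=q['QueueUrl'],
      MessageBody='{"orderId": 1001}',
      MessageGroupId='orders',  # ordering is preserved within a message group
  )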

204
Q

“A company is launching a new application and expects it to be very popular. The company requires a database layer that can scale along with the application. The schema will be frequently changed and the application cannot afford any downtime for database changes. Which AWS service allows the company to achieve these requirements?

  1. Amazon Aurora
  2. Amazon RDS MySQL
  3. Amazon DynamoDB
  4. Amazon RedShift”
A

“DynamoDB is a NoSQL DB, which means you can change the schema easily. It is also the only DB in the list that you can scale without any downtime

Amazon Aurora, RDS MySQL and RedShift all require changing instance sizes in order to scale, which causes an outage. They are also all relational (SQL) databases, so changing the schema is difficult”

205
Q

“Your company runs a two-tier application on the AWS cloud that is composed of a web front-end and an RDS database. The web front-end uses multiple EC2 instances in multiple Availability Zones (AZ) in an Auto Scaling group behind an Elastic Load Balancer. Your manager is concerned about a single point of failure in the RDS database layer. What would be the most effective approach to minimizing the risk of an AZ failure causing an outage to your database layer?

  1. Take a snapshot of the database
  2. Increase the DB instance size
  3. Create a Read Replica of the RDS DB instance in another AZ
  4. Enable Multi-AZ for the RDS DB instance”
A

4. “Multi-AZ RDS creates a replica in another AZ and synchronously replicates to it. This provides a DR solution: if the AZ in which the primary DB resides fails, Multi-AZ will automatically fail over to the replica instance with minimal downtime”

206
Q

“Another systems administrator in your company created an Auto Scaling group that is configured to ensure that four EC2 instances are available at a minimum at all times. The settings he selected on the Auto Scaling group are a minimum group size of four instances and a maximum group size of six instances. Your colleague has asked your assistance in trying to understand if Auto Scaling will allow him to terminate instances in the Auto Scaling group and what the effect would be if it does. What advice would you give to your colleague?

  1. Auto Scaling will not allow him to terminate an EC2 instance, because there are currently four provisioned instances and the minimum is set to four
  2. He would need to reduce the minimum group size setting to be able to terminate any instances
  3. This should be allowed, and Auto Scaling will launch additional instances to compensate for the ones that were terminated
  4. This can only be done via the command line”
A

3. This should be allowed, and Auto Scaling will launch additional instances to compensate

“Auto Scaling will allow the instances to be terminated. Because the desired and minimum capacity is four, terminating an instance drops the actual capacity below the minimum; Auto Scaling detects the shortfall and launches replacement instances to restore the group to its minimum size”

207
Q

“The Perfect Forward Secrecy (PFS) security feature uses a derived session key to provide additional safeguards against the eavesdropping of encrypted data. Which two AWS services support PFS? (choose 2)

  1. EC2
  2. EBS
  3. CloudFront
  4. Auto Scaling
  5. Elastic Load Balancing”
A

3,5 = CloudFront and ELB

“CloudFront and ELB support Perfect Forward Secrecy which creates a new private key for each SSL session

Perfect Forward Secrecy (PFS) provides additional safeguards against the eavesdropping of encrypted data, through the use of a unique random session key”

208
Q

“Your client is looking for a way to use standard templates for describing and provisioning their infrastructure resources on AWS. Which AWS service can be used in this scenario?”

A

CloudFormation

“AWS CloudFormation is a service that gives developers and businesses an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion. AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment”

209
Q

“You are creating an operational dashboard in CloudWatch for a number of EC2 instances running in your VPC. Which one of the following metrics will not be available by default?

  1. Memory usage
  2. Disk read operations
  3. Network in and out
  4. CPU utilization”
A

1. Memory usage

“Memory usage is not available by default; it requires an agent on the instance to publish a custom metric to CloudWatch. CPU utilization, disk read operations, and network in/out are available by default”

210
Q

“Your company SysOps practices involve running scripts within the Linux operating systems of your applications. Which of the following AWS services allow you to access the underlying operating system? (choose 2)

  1. Amazon RDS
  2. Amazon EMR
  3. AWS Lambda
  4. DynamoDB
  5. Amazon EC2”
A

2,5 = “With EMR and EC2 you have access to the underlying operating system, which means you can connect to the operating system using protocols such as SSH and then manage the operating system”

211
Q

“A Solutions Architect is designing a front-end that accepts incoming requests for back-end business logic applications. The Architect is planning to use Amazon API Gateway, which statements are correct in relation to the service? (choose 2)

  1. API Gateway is a collection of resources and methods that are integrated with back-end HTTP endpoints, Lambda functions or other AWS services
  2. API Gateway is a web service that gives businesses and web application developers an easy and cost-effective way to distribute content with low latency and high data transfer speeds
  3. Throttling can be configured at multiple levels including Global and Service Call
  4. API Gateway uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns
  5. API Gateway is a network service that provides an alternative to using the Internet to connect customers’ on-premise sites to AWS”
A

1,3

“Amazon API Gateway is a collection of resources and methods that are integrated with back-end HTTP endpoints, Lambda functions or other AWS services. API Gateway handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls. Throttling can be configured at multiple levels including Global and Service Call”

212
Q

“You are considering the security and durability of your data that is stored in Amazon EBS volumes. Which of the statements below is true?

  1. EBS volumes are replicated within their Availability Zone (AZ) to protect you from component failure
  2. EBS volumes are replicated across AZs to protect you from loss of access to an individual AZ
  3. EBS volumes are backed by Amazon S3 which replicates data across multiple facilities within a region
  4. You can define the number of AZs to replicate your data to via the API”
A

“EBS volume data is replicated across multiple servers within an AZ”

213
Q

“Your company runs a two-tier application that uses web front-ends running on EC2 instances across multiple AZs. The back-end is an RDS multi-AZ database instance. The front-end servers host a Content Management System (CMS) application that stores files that users upload in attached EBS storage. You don’t like having the uploaded files distributed across multiple EBS volumes and are concerned that this solution is not scalable.You would like to design a solution for storing the files that are uploaded to your EC2 instances that can achieve high levels of aggregate throughput and IOPS. The solution must scale automatically, and provide consistent low latencies. You also need to be able to mount the storage to the EC2 instances across multiple AZs within the region. Which AWS service would meet your needs?”

A

EFS

“The Amazon Elastic File System (EFS) is a file-based (not block or object-based) system that is accessed using the NFSv4.1 protocol. You can concurrently connect 1 to 1000s of EC2 instances from multiple AZs to a single EFS file system. EFS is elastic and provides high levels of aggregate throughput and IOPS.”

214
Q

“You work as a Solutions Architect at Digital Cloud Training. You are working on a disaster recovery solution that allows you to bring up your applications in another AWS region. Some of your applications run on EC2 instances and have proprietary software configurations with embedded licenses. You need to create duplicate copies of your EC2 instances in the other region. What would be the best way to do this? (choose 2)

  1. Create snapshots of the EBS volumes attached to the instances
  2. Copy the snapshots to the other region and create new EC2 instances from the snapshots
  3. Create an AMI of each EC2 instance and copy the AMIs to the other region
  4. Create new EC2 instances from the snapshots
  5. Create new EC2 instances from the AMIs”
A

“In this scenario we are not looking to backup the instances but to create identical copies of them in the other region. These are often called golden images. We must assume that any data used by the instances resides in another service and will be accessible to them when they are launched in a DR situation

You launch EC2 instances using AMIs not snapshots (you can create AMIs from snapshots). Therefore, you should create AMIs of each instance (rather than snapshots), copy the AMIs between regions and then create new EC2 instances from the AMIs”

215
Q

“You would like to create a highly available web application that serves static content using multiple On-Demand EC2 instances.

Which of the following AWS services will help you to achieve this? (choose 2)

  1. Multiple Availability Zones
  2. Amazon S3 and CloudFront
  3. Elastic Load Balancer and Auto Scaling
  4. DynamoDB and ElastiCache
  5. Direct Connect”
A

1,3

“None of the answer options present the full solution. However, you have been asked which services will help you to achieve the desired outcome. In this case we need high availability for on-demand EC2 instances.

A single Auto Scaling Group will enable the on-demand instances to be launched into multiple availability zones with an elastic load balancer distributing incoming connections to the available EC2 instances. This provides high availability and elasticity

Amazon S3 and CloudFront could be used to serve static content from an S3 bucket, however the question states that the web application runs on EC2 instances”

216
Q

“A Solutions Architect requires a highly available database that can deliver an extremely low RPO. Which of the following configurations uses synchronous replication?

  1. RDS Read Replica across AWS regions
  2. DynamoDB Read Replica
  3. RDS DB instance using a Multi-AZ configuration
  4. EBS volume synchronization”
A

“A Recovery Point Objective (RPO) relates to the amount of data loss that can be allowed, in this case a low RPO means that you need to minimize the amount of data lost so synchronous replication is required. Out of the options presented only Amazon RDS in a multi-AZ configuration uses synchronous replication”

217
Q

“The development team in your company has created a new application that you plan to deploy on AWS which runs multiple components in Docker containers. You would prefer to use AWS managed infrastructure for running the containers as you do not want to manage EC2 instances. Which of the below solution options would deliver these requirements? (choose 2)

  1. Use CloudFront to deploy Docker on EC2
  2. Use the Elastic Container Service (ECS) with the EC2 Launch Type
  3. Use the Elastic Container Service (ECS) with the Fargate Launch Type
  4. Put your container images in a private repository
  5. Put your container images in the Elastic Container Registry (ECR)”
A

3,5 = “If you do not want to manage EC2 instances you must use the AWS Fargate launch type, which is a serverless infrastructure managed by AWS. Fargate only supports container images hosted on Elastic Container Registry (ECR) or Docker Hub”

218
Q

“You would like to host a static website for digitalcloud.training on AWS. You will be using Route 53 to direct traffic to the website. Which of the below steps would help you achieve your objectives? (choose 2)”

  1. “Create an S3 bucket named digitalcloud.training
  2. Use any existing S3 bucket that has public read access enabled
  3. Create an “SRV” record that points to the S3 bucket
  4. Create a “CNAME” record that points to the S3 bucket
  5. Create an “Alias” record that points to the S3 bucket”
A

1,5 = “S3 can be used to host static websites and you can use a custom domain name with S3 using a Route 53 Alias record. When using a custom domain name the bucket name must be the same as the domain name

The Alias record is a Route 53 specific record type. Alias records are used to map resource record sets in your hosted zone to Amazon Elastic Load Balancing load balancers, Amazon CloudFront distributions, AWS Elastic Beanstalk environments, or Amazon S3 buckets that are configured as websites”
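A boto3 sketch of the Alias record (hosted zone IDs are placeholders; each region’s S3 website endpoint has a fixed hosted zone ID published by AWS):

  import boto3

  route53 = boto3.client('route53')
  route53.change_resource_record_sets(
      HostedZoneId='Z1EXAMPLE',  # your digitalcloud.training hosted zone
      ChangeBatch={'Changes': [{
          'Action': 'UPSERT',
          'ResourceRecordSet': {
              'Name': 'digitalcloud.training',
              'Type': 'A',
              'AliasTarget': {
                  'HostedZoneId': 'Z3AQBSTGFYJSTF',  # S3 website endpoint zone (us-east-1)
                  'DNSName': 's3-website-us-east-1.amazonaws.com',
                  'EvaluateTargetHealth': False,
              },
          },
      }]},
  )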

219
Q

“A customer has a production application running on Amazon EC2. The application frequently overwrites and deletes data, and it is essential that the application receives the most up-to-date version of the data whenever it is requested. Which storage service is most appropriate for these requirements?”

  1. “Amazon RedShift
  2. Amazon S3
  3. AWS Storage Gateway
  4. Amazon RDS”
A

“This scenario asks that when retrieving data the chosen storage solution should always return the most up-to-date data. Therefore we must use Amazon RDS as it provides read-after-write consistency”

220
Q

“You are a Solutions Architect at Digital Cloud Training. Your client’s company is growing and now has over 10,000 users. The client would like to start deploying services into the AWS Cloud including AWS Workspaces. The client expects there to be a large take-up of AWS services across their user base and would like to use their existing Microsoft Active Directory identity source for authentication. The client does not want to replicate account credentials into the AWS cloud. You have been tasked with designing the identity, authorization and access solution for the customer. Which AWS services will you include in your design? (choose 2)”

  1. “Use the Enterprise Edition of AWS Directory Service for Microsoft Active Directory
  2. Use a Large AWS Simple AD
  3. Use a Large AWS AD Connector
  4. Setup trust relationships to extend authentication from the on-premises Microsoft Active Directory into the AWS cloud
  5. Use an AWS Cognito user pool”
A

1,4 = “The customer wants to leverage their existing directory but not replicate account credentials into the cloud. Therefore they can use the Active Directory Service for Microsoft Active Directory and create a trust relationship with their existing AD domain. This will allow them to authenticate using local user accounts in their existing directory without creating an AD Domain Controller in the cloud (which would entail replicating account credentials)

Active Directory Service for Microsoft Active Directory is the best choice if you have more than 5000 users and/or need a trust relationship set up”

221
Q

“A Solutions Architect is developing a new web application on AWS that needs to be able to scale to support unpredictable workloads. The Architect prefers to focus on value-add activities such as software development and product roadmap development rather than provisioning and managing instances. Which solution is most appropriate for this use case?”

  1. “Amazon API Gateway and Amazon EC2
  2. Amazon API Gateway and AWS Lambda
  3. Elastic Load Balancing with Auto Scaling groups and Amazon EC2
  4. Amazon CloudFront and AWS Lambda”
A

“The Architect requires a solution that removes the need to manage instances. Therefore it must be a serverless service which rules out EC2. The two remaining options use AWS Lambda at the back-end for processing. Though CloudFront can trigger Lambda functions it is more suited to customizing content delivered from an origin. Therefore API Gateway with AWS Lambda is the most workable solution presented”

222
Q

“A company is planning moving their DNS records to AWS as part of a major migration to the cloud. Which statements are true about Amazon Route 53? (choose 2)

  1. You can transfer domains to Route 53 even if the Top-Level Domain (TLD) is unsupported
  2. You cannot automatically register EC2 instances with private hosted zones
  3. You can automatically register EC2 instances with private hosted zones
  4. Route 53 can be used to route Internet traffic for domains registered with another domain registrar”
A

2,4

“You cannot automatically register EC2 instances with private hosted zones

Route 53 can be used to route Internet traffic for domains registered with another domain registrar (any domain)

You can transfer domains to Route 53 only if the Top Level Domain (TLD) is supported”

223
Q

“Your manager has asked you to explain how Amazon ElastiCache may assist with the company’s plans to improve the performance of database queries. Which of the below statements is a valid description of the benefits of Amazon ElastiCache? (choose 2)

  1. ElastiCache is best suited for scenarios where the data base load type is OLTP
  2. ElastiCache nodes can be accessed directly from the Internet and EC2 instances in other regions, which allows you to improve response times for queries over long distances
  3. ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud
  4. ElastiCache can form clusters using a mixture of Memcached and Redis caching engines, allowing you to take advantage of the best features of each caching engine
  5. The in-memory caching provided by ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads or compute-intensive workloads”
A

3,5 =

“The in-memory caching provided by ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads or compute-intensive workloads

ElastiCache is best for scenarios where the DB load is based on Online Analytics Processing (OLAP) transactions not Online Transaction Processing (OLTP)

ElastiCache nodes cannot be accessed from the Internet, nor can they be accessed by EC2 instances in other VPCs

You cannot mix Memcached and Redis in a cluster”

224
Q

“You created a new Auto Scaling Group (ASG) with two subnets across AZ1 and AZ2 in your VPC. You set the minimum size to 6 instances. After creating the ASG you noticed that all EC2 instances were launched in AZ1 due to limited capacity of the required instance family within AZ2. You’re concerned about the imbalance of resources. What would be the expected behavior of Auto Scaling once the capacity constraints are resolved in AZ2?

  1. The ASG will launch three additional EC2 instances in AZ2 and keep the six in AZ1
  2. The ASG will try to rebalance by first creating three new instances in AZ2 and then terminating three instances in AZ1
  3. The ASG will launch six additional EC2 instances in AZ2
  4. The ASG will not do anything until the next scaling event”
A

2. “Auto Scaling rebalances by launching new EC2 instances in the AZs that have fewer instances first; only then will it start terminating instances in AZs that had more instances”

225
Q

“A Solutions Architect is designing a shared service for hosting containers from several customers on Amazon ECS. These containers will use several AWS services. A container from one customer must not be able to access data from another customer. Which solution should the Architect use to meet the requirements?

  1. IAM roles for tasks
  2. IAM roles for EC2 instances
  3. IAM Instance Profile for EC2 instances
  4. Network ACL”
A

1. “IAM roles for ECS tasks enable you to secure your infrastructure by assigning an IAM role directly to the ECS task rather than to the EC2 container instance. This means you can have one task that uses a specific IAM role for access to S3 and one task that uses an IAM role to access DynamoDB

With IAM roles for EC2 instances you assign all of the IAM policies required by tasks in the cluster to the EC2 instances that host the cluster. This does not allow the secure separation requested”
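A boto3 sketch of assigning a task role in the task definition (ARNs and image names are placeholders), so each customer’s task receives only its own permissions:

  import boto3

  ecs = boto3.client('ecs')
  ecs.register_task_definition(
      family='customer-a-app',
      taskRoleArn='arn:aws:iam::123456789012:role/customer-a-task-role',  # per-customer role
      containerDefinitions=[{
          'name': 'app',
          'image': '123456789012.dkr.ecr.us-east-1.amazonaws.com/customer-a:latest',
          'memory': 512,
          'essential': True,
      }],
  )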

226
Q

“As a Solutions Architect at Digital Cloud Training you are helping a client to design a multi-tier web application architecture. The client has requested that the architecture provide low-latency connectivity between all servers and be resilient across multiple locations. They would also like to use their existing Microsoft SQL licenses for the database tier. The client needs to maintain the ability to access the operating systems of all servers for the installation of monitoring software. How would you recommend the database tier is deployed?

  1. Amazon EC2 instances with Microsoft SQL Server and data replication within an AZ
  2. Amazon EC2 instances with Microsoft SQL Server and data replication between two different AZs
  3. Amazon RDS with Microsoft SQL Server
  4. Amazon RDS with Microsoft SQL Server in a Multi-AZ configuration”
A

2. “As the client needs to access the operating system of the database servers, we need to use EC2 instances, not RDS (which does not allow operating system access). We can implement EC2 instances with Microsoft SQL in two different AZs, which provides the requested location redundancy; AZs are connected by low-latency, high-throughput, redundant networking”

227
Q

“You have been asked to review the security posture of your EC2 instances in AWS. When reviewing security groups, which rule types do you need to inspect? (choose 2)

  1. Inbound
  2. Deny
  3. Outbound
  4. Stateless
  5. Stateful”
A

1,3 = “Security Groups can be configured with Inbound (ingress) and Outbound (egress) rules. You can only assign permit (allow) rules in a security group; there are no deny rules”

228
Q

“Your client needs to find the easiest way to load streaming data into data stores and analytics tools. The data will be captured, transformed, and loaded into Splunk. The transformation will be performed by a Lambda function so the service must support this integration. The client has also requested that a backup of the data is saved into an S3 bucket along with logging data. Which AWS service would the client be able to use to achieve these requirements?

  1. Kinesis Data Firehose
  2. Kinesis Data Analytics
  3. Redshift
  4. Kinesis Data Streams”
A

1. “Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It captures, transforms, and loads streaming data and can invoke a Lambda function to transform data before delivering it to destinations. Firehose destinations include S3, RedShift, Elasticsearch and Splunk”

229
Q

What can Kinesis Data Stream do?

A

“Kinesis Data Streams processes data and then stores it for applications to access. It does not deliver it to destinations such as Splunk”

230
Q

What can Kinesis Analytics do?

A

“Kinesis Data Analytics is used for processing and analyzing real-time streaming data. It can only output data to S3, RedShift, Elasticsearch and Kinesis Data Streams”

231
Q

“You are a Solutions Architect at Digital Cloud Training. A client of yours is using API Gateway for accepting and processing a large number of API calls to AWS Lambda. The client’s business is rapidly growing and he is therefore expecting a large increase in traffic to his API Gateway and AWS Lambda services. The client has asked for advice on ensuring the services can scale without any reduction in performance. What advice would you give to the client? (choose 2)

  1. API Gateway scales up to the default throttling limit, with some additional burst capacity available
  2. API Gateway scales manually through the assignment of provisioned throughput
  3. API Gateway can only scale up to the fixed throttling limits
  4. AWS Lambda automatically scales up by using larger instance sizes for your functions
  5. AWS Lambda scales concurrently executing functions up to your default limit”
A

1,5. “API Gateway can scale to any level of traffic received by an API. API Gateway scales up to the default throttling limit of 10,000 requests per second, with additional burst capacity of up to 5,000 requests. Throttling is used to protect back-end instances from traffic spikes

Lambda uses continuous scaling – scales out not up. Lambda scales concurrently executing functions up to your default limit (1000)”

232
Q

“An application that you will be deploying in your VPC requires 14 EC2 instances that must be placed on distinct underlying hardware to reduce the impact of the failure of a hardware node. The instances will use varying instance types. What configuration will cater to these requirements taking cost-effectiveness into account?

  1. Use a Cluster Placement Group within a single AZ
  2. Use a Spread Placement Group across two AZs
  3. Use dedicated hosts and deploy each instance on a dedicated host
  4. You cannot control which nodes your instances are placed on”
A

2. “A spread placement group is a group of instances that are each placed on distinct underlying hardware”
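A boto3 sketch (the AMI ID is a placeholder). Note that a spread placement group supports a maximum of seven running instances per AZ, which is why the fourteen instances would be spread across two AZs:

  import boto3

  ec2 = boto3.client('ec2')
  ec2.create_placement_group(GroupName='app-spread', Strategy='spread')

  # launch instances into the spread group (max seven per AZ)
  ec2.run_instances(
      ImageId='ami-0abc1234', InstanceType='m5.large',
      MinCount=7, MaxCount=7,
      Placement={'GroupName': 'app-spread'},
  )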

233
Q

“You have launched a Spot instance on EC2 for working on an application development project. In the event of an interruption what are the possible behaviors that can be configured? (choose 2)

  1. Restart
  2. Hibernate
  3. Stop
  4. Save
  5. Pause”
A

Hibernate and Stop

“You can specify whether Amazon EC2 should hibernate, stop, or terminate Spot Instances when they are interrupted. You can choose the interruption behavior that meets your needs. The default is to terminate Spot Instances when they are interrupted”

234
Q

“A developer is creating a solution for a real-time bidding application for a large retail company that allows users to bid on items of end-of-season clothing. The application is expected to be extremely popular and the back-end DynamoDB database may not perform as required.How can the Solutions Architect enable in-memory read performance with microsecond response times for the DynamoDB database?

  1. Configure DynamoDB Auto Scaling
  2. Enable read replicas
  3. Increase the provisioned throughput
  4. Configure Amazon DAX”
A
  4. Configure Amazon DAX

“Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second. You can enable DAX for a DynamoDB database with a few clicks”
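A sketch using the DAX SDK for Python (assuming the amazon-dax-client package; the cluster endpoint and table name are placeholders). The DAX client is designed as a drop-in replacement for the DynamoDB resource, so reads are served from the in-memory cache:

  import boto3
  from amazondax import AmazonDaxClient  # pip install amazon-dax-client

  dax = AmazonDaxClient.resource(
      endpoint_url='daxs://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com')
  table = dax.Table('bids')
  item = table.get_item(Key={'itemId': 'jacket-42'})  # served from the cache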

235
Q

“You are running a Hadoop cluster on EC2 instances in your VPC. The EC2 instances are launched by an Auto Scaling Group (ASG) and you have configured the ASG to scale out and in as demand changes. One of the instances in the group is the Hadoop Master Node and you need to ensure that it is not terminated when your ASG processes a scale in action.What is the best way this can be achieved without interrupting services?

  1. Use the Instance Protection feature to set scale in protection for the Hadoop Master Node
  2. Move the Hadoop Master Node to another ASG that has the minimum and maximum instance settings set to 1
  3. Enable Deletion Protection for the EC2 instance
  4. Change the DeleteOnTermination value for the EC2 instance”
A

1. “You can enable Instance Protection to protect a specific instance in an ASG from a scale in action”
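A boto3 sketch (the group name and instance ID are placeholders) of protecting the master node from scale in:

  import boto3

  autoscaling = boto3.client('autoscaling')
  autoscaling.set_instance_protection(
      AutoScalingGroupName='hadoop-asg',
      InstanceIds=['i-0123456789abcdef0'],  # the Hadoop Master Node
      ProtectedFromScaleIn=True,
  )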

236
Q

“A company is moving a large amount of sensitive data to the cloud. Data will be moved to Amazon S3 and the Solutions Architects are concerned about encryption and management of keys. Which of the statements below is correct regarding the SSE-KMS option? (choose 2)

  1. KMS uses customer master keys (CMKs)
  2. KMS uses customer provided keys (CPKs)
  3. Keys are managed through Amazon S3
  4. Auditable master keys can be created, rotated, and disabled from the IAM console
  5. Data is encrypted by default on the client side and then transferred in an encrypted state”
A

1,4

“You can use server-side encryption with SSE-KMS to protect your data with the default master key for the service, or with a customer master key (CMK) that you create in AWS KMS

KMS uses customer master keys (CMKs), not customer provided keys

SSE-KMS requires that AWS manage the data key but you manage the master key in AWS KMS

Auditable master keys can be created, rotated, and disabled from the IAM console

You can use the Amazon S3 encryption client in the AWS SDK from your own application to encrypt objects and upload them to Amazon S3, otherwise data is encrypted on Amazon S3, not on the client side”
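A minimal boto3 sketch (bucket, key, and key alias names are placeholders) of uploading an object with SSE-KMS:

  import boto3

  s3 = boto3.client('s3')
  s3.put_object(
      Bucket='sensitive-data-bucket',
      Key='records/2019/customers.json',
      Body=b'{"example": true}',
      ServerSideEncryption='aws:kms',   # SSE-KMS
      SSEKMSKeyId='alias/s3-data-key',  # the CMK managed in AWS KMS
  )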

237
Q

“You have taken a snapshot of an encrypted EBS volume and would like to share the snapshot with another AWS account. Which statements are true about sharing snapshots of encrypted EBS volumes? (choose 2)

  1. Snapshots of encrypted volumes are unencrypted
  2. You must obtain an encryption key from the target AWS account for encrypting the snapshot
  3. A custom CMK key must be used for encryption if you want to share the snapshot
  4. You must share the CMK key as well as the snapshot with the other AWS account
  5. You must store the CMK key in CloudHSM and delegate access to the other AWS account”
A

3,4

“A custom CMK key must be used for encryption if you want to share the snapshot

You must share the CMK key as well as the snapshot with the other AWS account

Snapshots of encrypted volumes are encrypted automatically

To share an encrypted snapshot you must encrypt it in the source account with a custom CMK key and then share the key with the target account”
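Sketched in boto3 (the snapshot ID, key ARN, and account IDs are placeholders), the two steps are sharing the snapshot and granting the target account use of the CMK:

  import boto3

  ec2 = boto3.client('ec2')
  kms = boto3.client('kms')

  # 1. Share the encrypted snapshot with the target account
  ec2.modify_snapshot_attribute(
      SnapshotId='snap-0123456789abcdef0',
      Attribute='createVolumePermission',
      OperationType='add',
      UserIds=['111122223333'],
  )

  # 2. Grant the target account use of the custom CMK that encrypted it
  kms.create_grant(
      KeyId='arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab',
      GranteePrincipal='arn:aws:iam::111122223333:root',
      Operations=['Decrypt', 'DescribeKey', 'CreateGrant'],
  )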

238
Q

“A colleague recently deployed a two-tier web application into a subnet using a test account. The subnet has an IP address block of 10.0.5.0/27 and he launched an Auto Scaling Group (ASG) with a desired capacity of 8 web servers. Another ASG has 6 application servers and two database servers and both ASGs are behind a single ALB with multiple target groups. All instances are On-Demand instances. Your colleague attempted to test a simulated increase in capacity requirements of 50% and not all instances were able to launch successfully. What would be the best explanations for the failure to launch the extra instances? (choose 2)

  1. The ASG is waiting for the health check grace period to expire; it might have been set to a high value
  2. AWS impose a soft limit of 20 instances per region for an account, you have exceeded this number
  3. There are insufficient IP addresses in the subnet range to allow for the EC2 instances, the AWS reserved addresses, and the ELB IP address requirements
  4. The IP address block overlaps with another subnet in the VPC
  5. There are insufficient resources available in the Availability Zone”
A

2,3

“The relevant facts are there is a soft limit of 20 On-demand or 20 reserved instances per region by default and there are 32 possible hosts in a /27 subnet. AWS reserve the first 4 and last 1 IP address. ELB requires 8 addresses within your subnet which only leaves 19 addresses available for use

There are 16 EC2 instances so a capacity increase of 50% would bring the total up to 24 instances which exceeds the address space and the default account limit for On-Demand instances”

239
Q

“A Solutions Architect is creating a new VPC and is creating a security group and network ACL design. Which of the statements below are true regarding network ACLs? (choose 2)

  1. Network ACLs operate at the instance level
  2. With Network ACLs you can only create allow rules
  3. Network ACLs contain a numbered list of rules that are evaluated in order from the lowest number until the explicit deny
  4. With Network ACLs all rules are evaluated until a permit is encountered or continues until the implicit deny
  5. Network ACLs only apply to traffic that is ingress or egress to the subnet not to traffic within the subnet”
A

3,5

“Network ACLs contain a numbered list of rules that are evaluated in order from the lowest number until the explicit deny. Network ACLs only apply to traffic that is ingress or egress to the subnet not to traffic within the subnet

Network ACLs function at the subnet level, not the instance level

With NACLs you can have permit and deny rules

All rules are not evaluated before making a decision (security groups do this), they are evaluated in order until a permit or deny is encountered”

240
Q

“An EBS-backed EC2 instance has been configured with some proprietary software that uses an embedded license. You need to move the EC2 instance to another Availability Zone (AZ) within the region. How can this be accomplished? Choose the best answer.”

  1. “Take a snapshot of the instance. Create a new EC2 instance and perform a restore from the snapshot
  2. Create an image from the instance. Launch an instance from the AMI in the destination AZ
  3. Use the AWS Management Console to select a different AZ for the existing instance
  4. Perform a copy operation to move the EC2 instance to the destination AZ”
A

“The easiest and recommended option is to create an AMI (image) from the instance and launch an instance from the AMI in the other AZ. AMIs are backed by snapshots which in turn are backed by S3 so the data is available from any AZ within the region

Alternatively, you could take a snapshot, launch an instance in the destination AZ, stop the instance, detach its root volume, create a volume from the snapshot you took and attach it to the instance. However, this is not the best option

There’s no way to move an EC2 instance from the management console

You cannot perform a copy operation to move the instance”

241
Q

“A member of the security team in your organization has brought an issue to your attention. External monitoring tools have noticed some suspicious traffic coming from a small number of identified public IP addresses. The traffic is destined for multiple resources in your VPC. What would be the easiest way to temporarily block traffic from the IP addresses to any resources in your VPC?”

A

“Add a rule to the Network ACL to deny traffic from the identified IP addresses. Ensure all subnets are associated with the Network ACL”

“The best way to handle this situation is to create a deny rule in a network ACL using the identified IP addresses as the source. You would apply the network ACL to the subnet(s) that are seeing suspicious traffic”
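A boto3 sketch of one such deny rule (the ACL ID and suspicious CIDR are placeholders); NACL rules are evaluated in ascending rule-number order, so the deny must sit below the allow rules:

  import boto3

  ec2 = boto3.client('ec2')
  ec2.create_network_acl_entry(
      NetworkAclId='acl-0abc1234',
      RuleNumber=90,                 # lower than the allow rules so it is evaluated first
      Protocol='-1',                 # all protocols
      RuleAction='deny',
      Egress=False,                  # inbound rule
      CidrBlock='198.51.100.24/32',  # one of the identified source IPs
  )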

242
Q

“You are a Solutions Architect at Digital Cloud Training. One of your clients is expanding their operations into multiple AWS regions around the world. The client has requested some advice on how to leverage their existing AWS Identity and Access Management (IAM) configuration in other AWS regions. What advice would you give to your client?

  1. IAM is a global service and the client can use users, groups, roles, and policies in any AWS region
  2. IAM is a regional service and the client will need to copy the configuration items required across to other AWS regions
  3. The client will need to create a VPC peering configuration with each remote AWS region and then allow IAM access across region”
A

1. IAM is a global service; users, groups, roles, and policies can be used in any AWS region

243
Q

“A Solutions Architect has been asked to improve the performance of a DynamoDB table. Latency is currently a few milliseconds and this needs to be reduced to microseconds whilst also scaling to millions of requests per second.

What is the BEST architecture to support this?”

A

Create a DynamoDB Accelerator (DAX) cluster. DAX is a fully managed in-memory cache for DynamoDB that reduces read latency from milliseconds to microseconds and scales to millions of requests per second.

244
Q

“A company is moving to a hybrid cloud model and will be setting up private links between all cloud data centers. An Architect needs to determine the connectivity options available when using AWS Direct Connect and public and private VIFs. Which options are available to the Architect? (choose 2)

  1. You can connect to AWS services over the private VIF
  2. You can connect to your private VPC subnets over the public VIF
  3. You can connect to your private VPC subnets over the private VIF, and to Public AWS services over the public VIF
  4. You can substitute your Internet connection at your DC with AWS’s public Internet through the use of a NAT gateway in your VPC
  5. Once connected to your VPC through Direct connect you can connect to all AZs within the region”
A

3,5 = “Each AWS Direct Connect connection can be configured with one or more virtual interfaces (VIFs). Public VIFs allow access to public services such as S3, EC2, and DynamoDB. Private VIFs allow access to your VPC. You must use public IP addresses on public VIFs, and private IP addresses on private VIFs

Once you have connected to an AWS region using AWS Direct Connect you can connect to all AZs within that region. You can also establish IPSec connections over public VIFs to remote regions.”

245
Q

“A Solutions Architect is designing a workload that requires a high performance object-based storage system that must be shared with multiple Amazon EC2 instances.Which AWS service delivers these requirements?”

A

S3

S3 is object-based storage and allows concurrent access, so it can be shared by multiple EC2 instances.

246
Q

“You would like to deploy an EC2 instance with enhanced networking. What are the pre-requisites for using enhanced networking? (choose 2)

  1. Instances must be launched from a HVM AMI
  2. Instances must be launched from a PV AMI
  3. Instances must be launched in a VPC
  4. Instances must be EBS backed, not Instance-store backed
  5. Instances must be of T2 Micro type”
A

1,3

“AWS currently supports enhanced networking capabilities using SR-IOV which provides direct access to network adapters, provides higher performance (packets-per-second) and lower latency. You must launch an HVM AMI with the appropriate drivers and it is only available for certain instance types and only supported in VPC”

247
Q

“You have been asked to take a snapshot of a non-root EBS volume that contains sensitive corporate data. You need to ensure you can capture all data that has been written to your Amazon EBS volume at the time the snapshot command is issued and are unable to pause any file writes to the volume long enough to take a snapshot. What is the best way to take a consistent snapshot whilst minimizing application downtime?

  1. Take the snapshot while the EBS volume is attached and the instance is running
  2. Un-mount the EBS volume, take the snapshot, then re-mount it again
  3. Stop the instance and take the snapshot
  4. You can’t take a snapshot for a non-root EBS volume”
A

2. “The key facts here are that whilst minimizing application downtime you need to take a consistent snapshot and are unable to pause writes long enough to do so. Therefore the best option is to unmount the EBS volume and take the snapshot. This will be much faster than shutting down the instance, taking the snapshot, and then starting it back up again

Snapshots capture a point-in-time state of an instance and are stored on S3. To take a consistent snapshot writes must be stopped (paused) until the snapshot is complete – if not possible the volume needs to be detached, or if it’s an EBS root volume the instance must be stopped”

(if you take the snapshot while the volume is attached and in use, you may not get the fully consistent snapshot that is needed here)

248
Q

“Which two types of security policies are supported by the Elastic Load Balancer for SSL negotiations between the ELB and clients? (choose 2)

  1. Custom security policies
  2. ELB predefined Security policies
  3. Security groups
  4. Network ACLs
  5. AES 256”
A

1,2 = “AWS recommends that you always use the default predefined security policy. When choosing a custom security policy you can select the ciphers and protocols (only for CLB)”

(AES 256 is a cipher, not a security policy)

249
Q

“You have been asked to design a cloud-native application architecture using AWS services. What is a typical use case for SQS?

  1. Decoupling application components to ensure that there is no dependency on the availability of a single component
  2. Providing fault tolerance for S3
  3. Co-ordination of work items between different human and non-human workers
  4. Sending emails to clients when a job is completed”
A

1. Decoupling application components to ensure that there is no dependency on the availability of a single component

250
Q

“A critical database runs in your VPC for which availability is a concern. Which RDS DB instance events may force the DB to be taken offline during a maintenance window?

  1. Selecting the Multi-AZ feature
  2. Promoting a Read Replica
  3. Security patching
  4. Updating DB parameter groups”
A

3. “Maintenance windows are configured to allow DB instance modifications to take place such as scaling and software patching. Some operations require the DB instance to be taken offline briefly and this includes security patching”

(the others do not take place during a maintenance window)

251
Q

“You are working on a database migration plan from an on-premise data center that includes a variety of databases that are being used for diverse purposes. You are trying to map each database to the correct service in AWS. Which of the below use cases are a good fit for DynamoDB (choose 2)

  1. Complex queries and joins
  2. Large amounts of dynamic data that require very low latency
  3. Migration from a Microsoft SQL relational database
  4. Rapid ingestion of clickstream data
  5. Backup for on-premises Oracle DB”
A

2,4

“Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability, and low read and write latency. Because of its performance profile and the fact that it is a NoSQL database, DynamoDB is good for rapidly ingesting clickstream data

You should use a relational database such as RDS when you need to do complex queries and joins. Microsoft SQL and Oracle DB are both relational databases so DynamoDB is not a good backup target or migration destination for these types of DB”

252
Q

“For which of the following workloads should a Solutions Architect consider using Elastic Beanstalk? (choose 2)”

  1. “A web application using Amazon RDS
  2. A data lake
  3. A long running worker process
  4. Caching content for Internet-based delivery
  5. A management task run occasionally”
A

1,3

Both a web application using Amazon RDS and a long-running worker process (where Elastic Beanstalk manages an SQS queue) are good examples of when to use Beanstalk, as multiple services are being orchestrated together, which is what Beanstalk is for: it is an orchestration engine.

253
Q

“You work for Digital Cloud Training and have just created a number of IAM users in your AWS account. You need to ensure that the users are able to make API calls to AWS services. What else needs to be done?

  1. Set a password for each user
  2. Create a set of Access Keys for the users
  3. Enable Multi-Factor Authentication for the users
  4. Create a group and add the users to it”
A

2… Access keys are a combination of an access key ID and a secret access key. You can assign two active access keys to a user at a time. These can be used to make programmatic calls to AWS when using the API in program code or at a command prompt when using the AWS CLI or the AWS PowerShell tools.

(a password is needed for logging in to the console, but a password is not needed to make API calls)
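
As an illustration, creating an access key pair for an existing IAM user with boto3 (the user name is a hypothetical placeholder):

  import boto3

  iam = boto3.client('iam')

  # Create an access key pair for programmatic (API/CLI) access
  resp = iam.create_access_key(UserName='sysadmin-1')   # hypothetical user
  key = resp['AccessKey']
  print(key['AccessKeyId'])
  # The secret access key is returned only once - store it securely
  print(key['SecretAccessKey'])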

254
Q

“A new security mandate requires that all personnel data held in the cloud is encrypted at rest. What two methods would allow you to encrypt data stored in S3 buckets at rest (choose 2)

  1. Use AWS S3 server-side encryption with Key Management Service keys or Customer-provided keys
  2. Encrypt the data at the source using the client’s CMK keys before transferring it to S3
  3. Make use of AWS S3 bucket policies to control access to the data at rest
  4. Use Multipart upload with SSL
  5. Use CloudHSM”
A

1,2..

“When using S3 encryption your data is always encrypted at rest and you can choose to use KMS managed keys or customer-provided keys. If you encrypt the data at the source and transfer it in an encrypted state it will also be encrypted in-transit

With client side encryption data is encrypted on the client side and transferred in an encrypted state and with server-side encryption data is encrypted by S3 before it is written to disk (data is decrypted when it is downloaded)”
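
A minimal sketch of the server-side approach with boto3 (the bucket, key and KMS key alias are hypothetical placeholders):

  import boto3

  s3 = boto3.client('s3')

  # Server-side encryption: S3 encrypts the object before writing it to disk
  s3.put_object(
      Bucket='my-sensitive-bucket',               # hypothetical bucket
      Key='personnel/record.json',
      Body=b'{"name": "example"}',
      ServerSideEncryption='aws:kms',             # or 'AES256' for SSE-S3
      SSEKMSKeyId='alias/personnel-data-key',     # hypothetical KMS key alias
  )

  # For client-side encryption, encrypt the bytes with your own key material
  # before calling put_object, so the data is already encrypted in transit.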

255
Q

What is CloudHSM for?

A

CloudHSM (Cloud Hardware Security Module) is used to create and manage encryption keys, but it does not itself encrypt the data.

256
Q

“You have been asked to deploy a new High-Performance Computing (HPC) cluster. You need to create a design for the EC2 instances that ensures close proximity, low latency and high network throughput. Which AWS features will help you to achieve this requirement whilst considering cost? (choose 2)

  1. Launch I/O Optimized EC2 instances in one private subnet in an AZ
  2. Use dedicated hosts
  3. Use EC2 instances with Enhanced Networking
  4. Use Provisioned IOPS EBS volumes
  5. Use Placement groups”
A

3,5… “Placement groups are recommended for applications that benefit from low latency and high bandwidth, and it is recommended to use an instance type that supports enhanced networking. Instances within a placement group can communicate with each other”

257
Q

“A Solutions Architect is developing an application that will store and index large (>1 MB) JSON files. The data store must be highly available and latency must be consistently low even during times of heavy usage. Which service should the Architect use?

  1. Amazon EFS
  2. Amazon RedShift
  3. DynamoDB
  4. AWS CloudFormation”
A

1… “EFS provides a highly-available data store with consistent low latencies and elasticity to scale as required”

(DynamoDB is a low-latency, highly available NoSQL database that can also store JSON documents, but only up to 400 KB in size, thus EFS is the only plausible solution here)

258
Q

“Which service uses a simple text file to model and provision infrastructure resources, in an automated and secure manner?

A

CloudFormation

259
Q

“An Architect is designing a serverless application that will accept images uploaded by users from around the world. The application will make API calls to back-end services and save the session state data of the user to a database. Which combination of services would provide a solution that is cost-effective while delivering the least latency?

  1. Amazon CloudFront, API Gateway, Amazon S3, AWS Lambda, DynamoDB
  2. API Gateway, Amazon S3, AWS Lambda, DynamoDB
  3. Amazon CloudFront, API Gateway, Amazon S3, AWS Lambda, Amazon RDS
  4. Amazon S3, API Gateway, AWS Lambda, Amazon RDS”
A

1…why?

  • CloudFront caches content near end users and ensures low latency
  • API Gateway and AWS Lambda are present in all options
  • DynamoDB can be used for storing session state data
  • (RDS is not a serverless service, so the options that include it can be ruled out)
260
Q

“An EC2 instance in an Auto Scaling group that has been reported as unhealthy has been marked for replacement. What is the process Auto Scaling uses to replace the instance? (choose 2)

  1. Auto Scaling will send a notification to the administrator
  2. If connection draining is enabled, Auto Scaling will wait for in-flight connections to complete or timeout
  3. Auto Scaling has to launch a replacement first before it can terminate the unhealthy instance
  4. Auto Scaling will terminate the existing instance before launching a replacement instance
  5. Auto Scaling has to perform rebalancing first, and then terminate the instance”
A

2,4… “If connection draining is enabled, Auto Scaling waits for in-flight requests to complete or timeout before terminating instances. Auto Scaling will terminate the existing instance before launching a replacement instance”

(the process is different for AZ rebalancing where it launches new instances first in the other AZ before it closes instances down).

261
Q

“An application architect has requested some assistance with selecting a database for a new data warehouse requirement. The database must provide high performance and scalability. The data will be structured and persistent and the DB must support complex queries using SQL and BI tools. Which AWS service will you recommend?

  1. DynamoDB
  2. RDS
  3. ElastiCache
  4. Redshift”
A

Redshift = Data warehouse

“Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and existing Business Intelligence (BI) tools. RedShift is a SQL based data warehouse that is used for analytics applications. RedShift is 10x faster than a traditional SQL DB”

262
Q

“A Solutions Architect is designing a solution to store and archive corporate documents, and has determined that Amazon Glacier is the right solution. Data must be delivered within 10 minutes of a retrieval request. Which features in Amazon Glacier can help meet this requirement?”

A

Expedited Retrieval… typically 1–5 minutes (Standard retrieval is 3–5 hours; Bulk retrieval is more cost-effective but takes 5–12 hours)
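
If the archives are managed through S3 (Glacier storage class), an Expedited restore could be requested like this with boto3 (bucket and key are hypothetical placeholders):

  import boto3

  s3 = boto3.client('s3')

  # Request an Expedited retrieval (typically 1-5 minutes) of an archived object
  s3.restore_object(
      Bucket='corporate-archive',                 # hypothetical bucket
      Key='documents/contract-2017.pdf',
      RestoreRequest={
          'Days': 1,                              # keep the restored copy for 1 day
          'GlacierJobParameters': {'Tier': 'Expedited'},
      },
  )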

263
Q

“Your Business Intelligence team use SQL tools to analyze data. What would be the best solution for performing queries on structured data that is being received at a high velocity?

  1. EMR using Hive
  2. Kinesis Firehose with RDS
  3. EMR running Apache Spark
  4. Kinesis Firehose with RedShift”
A

4…Kinesis Firehose with Redshift

“Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. Firehose Destinations include: Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk

Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and existing Business Intelligence tools.

264
Q

“A Solutions Architect is designing a solution for a financial application that will receive trading data in large volumes. What is the best solution for ingesting and processing a very large number of data streams in near real time?

  1. EMR
  2. Kinesis Firehose
  3. Redshift
  4. Kinesis Data Streams”
A

4… “Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. It enables real-time processing of streaming big data and can be used for rapidly moving data off data producers and then continuously processing the data. Kinesis Data Streams stores data for later processing by applications (a key difference from Firehose, which delivers data directly to AWS services)

(Kinesis Firehose can allow transformation of data and it then delivers data to supported services”)

265
Q

Which port does the SSH protocol use?

A

“The SSH protocol uses TCP port 22

266
Q

“You work as a System Administrator at Digital Cloud Training and your manager has asked you to investigate an EC2 web server hosting videos that is constantly running at over 80% CPU utilization. Which of the approaches below would you recommend to fix the issue?

  1. Create an Elastic Load Balancer and register the EC2 instance to it
  2. Create a CloudFront distribution and configure the Amazon EC2 instance as the origin
  3. Create an Auto Scaling group from the instance using the CreateAutoScalingGroup action
  4. Create a Launch Configuration from the instance using the CreateLaunchConfiguration action”
A

2…“Using the CloudFront content delivery network (CDN) would offload the processing from the EC2 instance as the videos would be cached and accessed without hitting the EC2 instance”

(“Using CloudFront is preferable to using an Auto Scaling group to launch more instances as it is designed for caching content and would provide the best user experience”)

267
Q

“You are deploying an application on Amazon EC2 that must call AWS APIs. Which method of securely passing credentials to the application should you use?

  1. Store the API credentials on the instance using instance metadata
  2. Store API credentials as an object in Amazon S3
  3. Assign IAM roles to the EC2 instances
  4. Embed the API credentials into your application files”
A

3… “Always use IAM roles when you can

It is an AWS best practice not to store API credentials within applications, on file systems or on instances (such as in metadata).”
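
With an IAM role attached to the instance, application code never needs embedded keys; for example, with boto3 the SDK fetches temporary credentials from the instance metadata automatically:

  import boto3

  # No access keys anywhere in the code or on the file system:
  # boto3's default credential chain picks up the temporary credentials
  # provided by the EC2 instance's IAM role via instance metadata.
  s3 = boto3.client('s3')
  buckets = s3.list_buckets()
  print([b['Name'] for b in buckets['Buckets']])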

268
Q

“A Solutions Architect is migrating a small relational database into AWS. The database will run on an EC2 instance and the DB size is around 500 GB. The database is infrequently used with small amounts of requests spread across the day. The DB is a low priority and the Architect needs to lower the cost of the solution. What is the MOST cost-effective storage type?

  1. Amazon EBS Provisioned IOPS SSD
  2. Amazon EBS Throughput Optimized HDD
  3. Amazon EBS General Purpose SSD
  4. Amazon EFS”
A

2… “Throughput Optimized HDD is the most cost-effective storage option and for a small DB with low traffic volumes it may be sufficient. Note that the volume must be at least 500 GB in size

(AWS recommend using General Purpose SSD rather than Throughput Optimized HDD for most use cases but it is more expensive”)

269
Q

“A company is migrating an on-premises 10 TB MySQL database to AWS. The company expects the database to quadruple in size and the business requirement is that replication lag must be kept under 100 milliseconds. Which Amazon RDS engine meets these requirements?

  1. MySQL
  2. Microsoft SQL Server
  3. Oracle
  4. Amazon Aurora”
A

4… “Aurora databases can scale up to 64 TB and Aurora Replicas feature millisecond latency

All other RDS engines have a limit of 16 TiB maximum DB size and asynchronous replication typically takes seconds”

270
Q

“A systems integration company that helps customers migrate into AWS repeatedly build large, standardized architectures using several AWS services. The Solutions Architects have documented the architectural blueprints for these solutions and are looking for a method of automating the provisioning of the resources. Which AWS service would satisfy this requirement?”

A

CloudFormation

“CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts”

271
Q

What is AWS CodeDeploy?

A

a deployment service that “automates application deployments to Amazon EC2 instances, on-premises instances, or serverless Lambda functions”

272
Q

“You need to provide AWS Management Console access to a team of new application developers. The team members who perform the same role are assigned to a Microsoft Active Directory group and you have been asked to use Identity Federation and RBAC. Which AWS services would you use to configure this access? (choose 2)

  1. AWS Directory Service Simple AD
  2. AWS Directory Service AD Connector
  3. AWS IAM Groups
  4. AWS IAM Roles
  5. AWS IAM Users”
A

2,4…

“AD Connector is a directory gateway for redirecting directory requests to your on-premise Active Directory. AD Connector eliminates the need for directory synchronization and the cost and complexity of hosting a federation infrastructure and connects your existing on-premise AD to AWS. It is the best choice when you want to use an existing Active Directory with AWS services

IAM Roles are created and then “assumed” by trusted entities and define a set of permissions for making AWS service requests. With IAM Roles you can delegate permissions to resources for users and services without using permanent credentials (e.g. user name and password)”.

you map the groups in AD to IAM Roles (not IAM users or groups)
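
Behind the scenes, federation exchanges the SAML assertion from the IdP for temporary credentials; a sketch using boto3 (the ARNs and the assertion are hypothetical placeholders):

  import boto3

  sts = boto3.client('sts')

  # Placeholder: the base64-encoded SAML response obtained from the IdP (e.g. AD FS)
  saml_assertion = '<base64-encoded SAML response>'

  resp = sts.assume_role_with_saml(
      RoleArn='arn:aws:iam::123456789012:role/Developers',          # hypothetical role
      PrincipalArn='arn:aws:iam::123456789012:saml-provider/ADFS',  # hypothetical IdP
      SAMLAssertion=saml_assertion,
  )
  creds = resp['Credentials']   # temporary AccessKeyId/SecretAccessKey/SessionToken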

273
Q

What is AD Connector?

A

“AD Connector is a directory gateway for redirecting directory requests to your on-premise Active Directory. AD Connector eliminates the need for directory synchronization and the cost and complexity of hosting a federation infrastructure and connects your existing on-premise AD to AWS. It is the best choice when you want to use an existing Active Directory with AWS services

274
Q

What is Simple AD?

A

“AWS Directory Service Simple AD is an inexpensive Active Directory-compatible service with common directory features. It is a fully cloud-based solution and does not integrate with an on-premises Active Directory service.

275
Q

“You are a Solutions Architect at Digital Cloud Training. A client from the agricultural sector has approached you for some advice around the collection of a large volume of data from sensors they have deployed around the country. An application will collect data from over 100,000 sensors and each sensor will send around 1KB of data every minute. The data needs to be stored in a durable, low latency data store. The client also needs historical data that is over 1 year old to be moved into a data warehouse where they can perform analytics using standard SQL queries. What combination of AWS services would you recommend to the client? (choose 2)

  1. Kinesis Data Streams for data ingestion
  2. EMR for analytics
  3. DynamoDB for data ingestion
  4. Elasticache for analytics
  5. RedShift for the analytics
A

3,5…

“The key requirements are that the data is recorded in a low-latency, durable data store and then moved into a data warehouse for historical analytics once it is over 1 year old. This is a good use case for DynamoDB as the data store and RedShift as the data warehouse. Kinesis is used for real-time streaming data, not as a durable data store, so it is not a good fit”

276
Q

What is Kinesis good for in general?

A

“Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information.

277
Q

“The development team at your company have created a new mobile application that will be used by users to access confidential data. The developers have used Amazon Cognito for authentication, authorization, and user management. Due to the sensitivity of the data, there is a requirement to add another method of authentication in addition to a username and password. You have been asked to recommend the best solution. What is your recommendation?

  1. Integrate IAM with a user pool in Cognito
  2. Enable multi-factor authentication (MFA) in IAM
  3. Integrate a third-party identity provider (IdP)
  4. Use multi-factor authentication (MFA) with a Cognito user pool”
A

4… “You can use MFA with a Cognito user pool (not in IAM) and this satisfies the requirement.

A user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito. Your users can also sign in through social identity providers like Facebook or Amazon, and through SAML identity providers”
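
Enabling MFA on a user pool can be done via the SetUserPoolMfaConfig API; a boto3 sketch (the user pool ID is a hypothetical placeholder):

  import boto3

  cognito = boto3.client('cognito-idp')

  # Require MFA for all users in the pool, using TOTP software tokens
  cognito.set_user_pool_mfa_config(
      UserPoolId='us-east-1_EXAMPLE',             # hypothetical user pool ID
      SoftwareTokenMfaConfiguration={'Enabled': True},
      MfaConfiguration='ON',                      # 'OPTIONAL' lets users opt in instead
  )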

278
Q

“A company runs a multi-tier application in an Amazon VPC. The application has an ELB Classic Load Balancer as the front end in a public subnet, and an Amazon EC2-based reverse proxy that performs content-based routing to two back end EC2 instances in a private subnet. The application is experiencing increasing load and the Solutions Architect is concerned that the reverse proxy and current back end setup will be insufficient. Which actions should the Architect take to achieve a cost-effective solution that ensures the application automatically scales to meet the demand? (choose 2)

  1. Replace the Amazon EC2 reverse proxy with an ELB internal Classic Load Balancer
  2. Add Auto Scaling to the Amazon EC2 back end fleet
  3. Add Auto Scaling to the Amazon EC2 reverse proxy layer
  4. Use t3 burstable instance types for the back end fleet
  5. Replace both the front end and reverse proxy layers with an Application Load Balancer”
A

2,5.

“Due to the reverse proxy being a bottleneck to scalability, we need to replace it with a solution that can perform content-based routing. This means we must use an ALB not a CLB as ALBs support path-based and host-based routing

Auto Scaling should be added to the architecture so that the back end EC2 instances do not become a bottleneck. With Auto Scaling instances can be added and removed from the back end fleet as demand changes”

279
Q

“You have implemented API Gateway and enabled a cache for a specific stage. How can you control the cache to enhance performance and reduce load on back-end services?

  1. Configure the throttling feature
  2. Enable bursting
  3. Using time-to-live (TTL) settings
  4. Using CloudFront controls”
A

3..TTL

“Caches are provisioned for a specific stage of your APIs. Caching features include customisable keys and time-to-live (TTL) in seconds for your API data which enhances response times and reduces load on back-end services

You can throttle and monitor requests to protect your back-end, but the cache is used to reduce the load on the back-end”

280
Q

What is AWS PrivateLink for?

A

“Using PrivateLink you can connect your VPC to supported AWS services, services hosted by other AWS accounts (VPC endpoint services), and supported AWS Marketplace partner services”

281
Q

“You have created an application in a VPC that uses a Network Load Balancer (NLB). The application will be offered in a service provider model for AWS principals in other accounts within the region to consume. Based on this model, what AWS service will be used to offer the service for consumption?

  1. IAM Role Based Access Control
  2. Route 53
  3. VPC Endpoint Services using AWS PrivateLink
  4. API Gateway”
A

3… to connect to other AWS accounts privately you need to use PrivateLink.

282
Q

“You are creating a design for an internal-only AWS service that uses EC2 instances to process information on S3 and store the results in DynamoDB. You need to allow access to several developers who will be testing code and need to apply security best practices to the architecture. Which of the security practices below are recommended? (choose 2)”

  1. “Store the access keys and secret IDs within the application
  2. Disable root API access keys and secret key
  3. Control user access through network ACLs
  4. Assign an IAM user for each EC2 instance
  5. Use bastion hosts to enforce control and visibility”
A

2,5…

“Best practices for securing operating systems and applications include:

  • Disable root API access keys and secret key
  • Restrict access to instances from limited IP ranges using Security Groups
  • Password protect the .pem file on user machines
  • Delete keys from the authorized_keys file on your instances when someone leaves your organization or no longer requires access
  • Rotate credentials (DB, Access Keys)
  • Regularly run least privilege checks using IAM user Access Advisor and IAM user Last Used Access Keys
  • Use bastion hosts to enforce control and visibility”
283
Q

“There is expected to be a large increase in write intensive traffic to a website you manage that registers users onto an online learning program. You are concerned about writes to the database being dropped and need to come up with a solution to ensure this does not happen. Which of the solution options below would be the best approach to take?

  1. Update the application to write data to an SQS queue and provision additional EC2 instances to process the data and write it to the database
  2. Use RDS in a multi-AZ configuration to distribute writes across AZs
  3. Update the application to write data to an S3 bucket and provision additional EC2 instances to process the data and write it to the database
  4. Use CloudFront to cache the writes and configure the database as a custom origin”
A

1… SQS allows us to decouple the application and store messages waiting to be processed. Then we add some EC2 instances to process this queue
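
A minimal sketch of the decoupled write path with boto3 (the queue URL is a hypothetical placeholder): the web tier enqueues registrations, and worker instances drain the queue and write to the database at a rate it can sustain.

  import json
  import boto3

  sqs = boto3.client('sqs')
  queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/registrations'  # hypothetical

  # Web tier: enqueue the registration instead of writing to the DB directly
  sqs.send_message(QueueUrl=queue_url,
                   MessageBody=json.dumps({'user': 'alice', 'course': 'aws-101'}))

  # Worker tier: poll the queue and write each record to the database
  resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                             WaitTimeSeconds=20)        # long polling
  for msg in resp.get('Messages', []):
      record = json.loads(msg['Body'])
      # ... write record to the database here ...
      sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])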

284
Q

“A company is generating large datasets with millions of rows that must be summarized by column. Existing business intelligence tools will be used to build daily reports. Which storage service meets the requirements?

  1. Amazon RedShift
  2. Amazon RDS
  3. Amazon ElastiCache
  4. Amazon DynamoDB”
A

1.Redshift

“Amazon RedShift uses columnar storage and is used for analyzing data using business intelligence tools (SQL)”

(DynamoDB is not columnar as it is NoSQL, RDS is better for OLTP work and not analytics and Elasticache is an in-memory caching service)

285
Q

“An EC2 status check on an EBS volume is showing as insufficient-data. What is the most likely explanation?

  1. The checks require more information to be manually entered
  2. The checks may still be in progress on the volume
  3. The checks have failed on the volume
  4. The volume does not have enough data on it to check properly”
A

2…. “The possible values are ok, impaired, warning, or insufficient-data. If all checks pass, the overall status of the volume is ok. If the check fails, the overall status is impaired. If the status is insufficient-data, then the checks may still be taking place on your volume at the time”

286
Q

“You have a three-tier web application running on AWS that utilizes Route 53, ELB, Auto Scaling and RDS. One of the EC2 instances that is registered against the ELB fails a health check. What actions will the ELB take in this circumstance?

  1. The ELB will terminate the instance that failed the health check
  2. The ELB will stop sending traffic to the instance that failed the health check
  3. The ELB will instruct Auto Scaling to terminate the instance and launch a replacement
  4. The ELB will update Route 53 by removing any references to the instance”
A
2… stop sending traffic

“The ELB will simply stop sending traffic to the instance as it has determined it to be unhealthy

ELBs are not responsible for terminating EC2 instances.

The ELB does not send instructions to the ASG, the ASG has its own health checks and can also use ELB health checks to determine the status of instances”

287
Q

“A Solutions Architect is designing a static website that will use the zone apex of a DNS domain (e.g. example.com). The Architect wants to use the Amazon Route 53 service. Which steps should the Architect take to implement a scalable and cost-effective solution? (choose 2)

  1. Host the website on an Amazon EC2 instance with ELB and Auto Scaling, and map a Route 53 Alias record to the ELB endpoint
  2. Host the website using AWS Elastic Beanstalk, and map a Route 53 Alias record to the Beanstalk stack
  3. Host the website on an Amazon EC2 instance, and map a Route 53 Alias record to the public IP address of the EC2 instance
  4. Serve the website from an Amazon S3 bucket, and map a Route 53 Alias record to the website endpoint
  5. Create a Route 53 hosted zone, and set the NS records of the domain to use Route 53 name servers”
A

4,5

“To use Route 53 for an existing domain the Architect needs to change the NS records to point to the Amazon Route 53 name servers. This will direct name resolution to Route 53 for the domain name. The most cost-effective solution for hosting the website will be to use an Amazon S3 bucket. To do this you create a bucket using the same name as the domain name (e.g. example.com) and use a Route 53 Alias record to map to it

Using an EC2 instance instead of an S3 bucket would be more costly, so that rules out the two options that explicitly mention EC2

Elastic Beanstalk provisions EC2 instances so again this would be a more costly option”

288
Q

“You have been asked to recommend the best AWS storage solution for a client. The client requires a storage solution that provide a mounted file system for a Big Data and Analytics application. The client’s requirements include high throughput, low latency, read-after-write consistency and the ability to burst up to multiple GB/s for short periods of time. Which AWS service can meet this requirement?

  1. EBS
  2. S3
  3. EFS
  4. DynamoDB”
A

EFS.. “EFS is a fully-managed service that makes it easy to set up and scale file storage in the Amazon Cloud. EFS is good for big data and analytics, media processing workflows, content management, web serving, home directories etc.. EFS uses the NFSv4.1 protocol which is a protocol for mounting file systems (similar to Microsoft’s SMB)”

(DynamoDB is a NoSQL database suited to OLTP-style workloads; it does not provide a mountable file system)

289
Q

What is the difference between an IAM role and an IAM user?

A

An IAM user has permanent long-term credentials and is used to directly interact with AWS services. An IAM role does not have any credentials and cannot make direct requests to AWS services. IAM roles are meant to be assumed by authorized entities, such as IAM users, applications, or an AWS service such as EC2.

290
Q

When should I use an IAM user, IAM group, or IAM role?

A

An IAM user has permanent long-term credentials and is used to directly interact with AWS services. An IAM group is primarily a management convenience to manage the same set of permissions for a set of IAM users. An IAM role is an AWS Identity and Access Management (IAM) entity with permissions to make AWS service requests. IAM roles cannot make direct requests to AWS services; they are meant to be assumed by authorized entities, such as IAM users, applications, or AWS services such as EC2. Use IAM roles to delegate access within or between AWS accounts.

291
Q

What problem does IAM roles for EC2 instances solve?

A

IAM roles for EC2 instances simplifies management and deployment of AWS access keys to EC2 instances. Using this feature, you associate an IAM role with an instance. Then your EC2 instance provides the temporary security credentials to applications running on the instance, and the applications can use these credentials to make requests securely to the AWS service resources defined in the role.

= IAM roles for EC2 instances enables your applications running on EC2 to make requests to AWS services such as Amazon S3, Amazon SQS, and Amazon SNS without you having to copy AWS access keys to every instance

292
Q

What are temporary security credentials?

A

Temporary security credentials consist of the AWS access key ID, secret access key, and security token. Temporary security credentials are valid for a specified duration and for a specific set of permissions. Temporary security credentials are sometimes simply referred to as tokens.

(for GetSessionToken: default 12 hours, minimum 15 minutes, maximum 36 hours)

293
Q

What is identity federation?

A

AWS Identity and Access Management (IAM) supports identity federation for delegated access to the AWS Management Console or AWS APIs. With identity federation, external identities are granted secure access to resources in your AWS account without having to create IAM users. These external identities can come from your corporate identity provider (such as Microsoft Active Directory or from the AWS Directory Service) or from a web identity provider (such as Amazon Cognito, Login with Amazon, Facebook, Google, or any OpenID Connect-compatible provider).

294
Q

What are federated users?

A

Federated users (external identities) are users you manage outside of AWS in your corporate directory, but to whom you grant access to your AWS account using temporary security credentials. They differ from IAM users, which are created and maintained in your AWS account.

295
Q

What is web identity federation?

A

Web identity federation allows you to create AWS-powered mobile apps that use public identity providers (such as Amazon Cognito, Login with Amazon, Facebook, Google, or any OpenID Connect-compatible provider) for authentication. With web identity federation, you have an easy way to integrate sign-in from public identity providers (IdPs) into your apps without having to write any server-side code and without distributing long-term AWS security credentials with the app.

296
Q

How do I enable web identity federation with accounts from public IdPs?

A

For best results, use Amazon Cognito as your identity broker for almost all web identity federation scenarios.

297
Q

“A developer is writing some code and wants to work programmatically with IAM. Which feature of IAM allows you direct access to the IAM web service using HTTPS to call service actions and what is the method of authentication that must be used? (choose 2)

  1. Query API
  2. OpenID Connect
  3. API Gateway
  4. Access key ID and secret access key
  5. IAM role”
A

1,4.

AWS recommends that you use the AWS SDKs to make programmatic API calls to IAM. However, you can also use the IAM Query API to make direct calls to the IAM web service.

An access key ID and secret access key must be used for authentication when using the Query API.

298
Q

“You are designing the disk configuration for an EC2 instance. The instance will be running an application that requires heavy read/write IOPS. You need to provision a single volume that is 500 GiB in size and needs to support 20,000 IOPS. What EBS volume type will you select?

  1. EBS General Purpose SSD
  2. EBS Provisioned IOPS SSD
  3. EBS Throughput Optimized HDD
  4. EBS General Purpose SSD in a RAID 1 configuration”
A

2… “This is simply about understanding the performance characteristics of the different EBS volume types. The only EBS volume type that can support 20,000 IOPS on a single volume is Provisioned IOPS SSD (General Purpose SSD tops out at 16,000 IOPS)”

(Provisioned IOPS is for workloads needing more than 10,000 IOPS: up to 64,000 IOPS per volume on Nitro-based instances (32,000 otherwise), and up to 50 IOPS per GiB)

299
Q

What are the regional limits for EC2 instances?

A
  • Soft limit of 20 on-demand instances per region
  • 300 TiB of aggregate PIOPS volumes per region and 300,000 aggregate PIOPS
300
Q

“You have created a new VPC and setup an Auto Scaling Group to maintain a desired count of 2 EC2 instances. The security team has requested that the EC2 instances be located in a private subnet. To distribute load, you have to also setup an Internet-facing Application Load Balancer (ALB). With your security team’s wishes in mind what else needs to be done to get this configuration to work? (choose 2)

  1. Attach an Internet Gateway to the private subnets
  2. Add an Elastic IP address to each EC2 instance in the private subnet
  3. For each private subnet create a corresponding public subnet in the same AZ
  4. Add a NAT gateway to the private subnet
  5. Associate the public subnets with the ALB”
A

3,5… “ELB nodes have public IPs and route traffic to the private IP addresses of the EC2 instances. You need one public subnet in each AZ where the ELB is defined and the private subnets are located”

301
Q

“An application you are designing will gather data from a website hosted on an EC2 instance and write the data to an S3 bucket. The application will use API calls to interact with the EC2 instance and S3 bucket. What strategy would you implement for access control? (choose 2)

  1. Create an IAM policy
  2. Use key pairs
  3. Grant programmatic access
  4. Create a bucket policy
  5. Grant AWS Management Console access”
A

1,3

policies are documents that define permissions and can be applied to users, groups, and roles.

=> within an IAM policy you can grant either programmatic access or AWS Management Console access to Amazon S3 resources.
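
For illustration, attaching an inline IAM policy that grants programmatic S3 access to a user (the user, bucket and policy names are hypothetical placeholders):

  import json
  import boto3

  iam = boto3.client('iam')

  policy = {
      'Version': '2012-10-17',
      'Statement': [{
          'Effect': 'Allow',
          'Action': ['s3:GetObject', 's3:PutObject'],
          'Resource': 'arn:aws:s3:::app-data-bucket/*',   # hypothetical bucket
      }],
  }

  iam.put_user_policy(UserName='developer-1',             # hypothetical user
                      PolicyName='AppDataS3Access',
                      PolicyDocument=json.dumps(policy))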

302
Q

What are key pairs used for?

A

Key pairs are used to securely log in to EC2 instances (e.g. SSH access to Linux instances); they are not used to authenticate API calls.

303
Q

“An application you manage stores encrypted data in S3 buckets. You need to be able to query the encrypted data using SQL queries and write the encrypted results back to the S3 bucket. As the data is sensitive you need to implement fine-grained control over access to the S3 bucket. What combination of services represents the BEST options to support these requirements? (choose 2)”

  1. “Use Athena for querying the data and writing the results back to the bucket
  2. Use IAM policies to restrict access to the bucket
  3. Use bucket ACLs to restrict access to the bucket
  4. Use AWS Glue to extract the data, analyze it, and load it back to the S3 bucket
  5. Use the AWS KMS API to query the encrypted data, and the S3 API for writing the results”
A

1,2

“Athena also allows you to easily query encrypted data stored in Amazon S3 and write encrypted results back to your S3 bucket. Both, server-side encryption and client-side encryption are supported

With IAM policies, you can grant IAM users fine-grained control to your S3 buckets, and is preferable to using bucket ACLs”

304
Q

“You have been asked to come up with a solution for providing single sign-on to existing staff in your company who manage on-premise web applications and now need access to the AWS management console to manage resources in the AWS cloud.Which product combinations provide the best solution to achieve this requirement?

  1. Use your on-premise LDAP directory with IAM
  2. Use IAM and MFA
  3. Use the AWS Secure Token Service (STS) and SAML
  4. Use IAM and Amazon Cognito”
A

3…

“Single sign-on using federation allows users to login to the AWS console without assigning IAM credentials

The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for IAM users or for users that you authenticate (such as federated users from an on-premise directory)

Federation (typically Active Directory) uses SAML 2.0 for authentication and grants temporary access based on the users AD credentials. The user does not need to be a user in IAM”

305
Q

What is Amazon Cognito used for?

A

to authenticate users to web and mobile apps

(not for providing single sign-on between on-premises directories and the AWS Management Console; for that, use federation with STS and SAML)

306
Q

What are some of the most important AWS Serverless services?

A
  • API Gateway
  • Lambda
  • S3
  • DynamoDB
  • SNS
  • SQS
  • Kinesis
307
Q

“You would like to store a backup of an Amazon EBS volume on Amazon S3. What is the easiest way of achieving this?

  1. Create a snapshot of the volume
  2. Write a custom script to automatically copy your data to an S3 bucket
  3. You don’t need to do anything, EBS volumes are automatically backed up by default
  4. Use SWF to automatically create a backup of your EBS volumes and then upload them to an S3 bucket”
A

1..snapshots capture a point-in-time state of an instance and snapshots of EBS volumes are automatically stored in S3

308
Q

“An Amazon CloudWatch alarm recently notified you that the load on a DynamoDB table you are running is getting close to the provisioned capacity for writes. The DynamoDB table is part of a two-tier customer-facing application and is configured using provisioned capacity. You are concerned about what will happen if the limit is reached but need to wait for approval to increase the WriteCapacityUnits value assigned to the table. What will happen if the limit for the provisioned capacity for writes is reached?

  1. DynamoDB scales automatically so there’s no need to worry
  2. The requests will be throttled, and fail with an HTTP 400 code (Bad Request) and a ProvisionedThroughputExceededException
  3. The requests will be throttled, and fail with an HTTP 503 code (Service Unavailable)
  4. The requests will succeed, and an HTTP 200 status code will be returned
A

Answer: 2

DynamoDB can throttle requests that exceed the provisioned throughput for a table. When a request is throttled it fails with an HTTP 400 code (Bad Request) and a ProvisionedThroughputExceeded exception (not a 503 or 200 status code)

When using the provisioned capacity pricing model DynamoDB does not automatically scale. DynamoDB can automatically scale when using the new on-demand capacity mode”
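
Application code can detect this condition explicitly; a boto3 sketch (the table name is a hypothetical placeholder; note that the SDK already retries throttled requests with exponential backoff before surfacing the error):

  import boto3
  from botocore.exceptions import ClientError

  table = boto3.resource('dynamodb').Table('customers')   # hypothetical table

  try:
      table.put_item(Item={'id': '42', 'name': 'example'})
  except ClientError as e:
      if e.response['Error']['Code'] == 'ProvisionedThroughputExceededException':
          pass   # HTTP 400 - provisioned write capacity exceeded; back off and retry
      else:
          raise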

309
Q

“A Solutions Architect has setup a VPC with a public subnet and a VPN-only subnet. The public subnet is associated with a custom route table that has a route to an Internet Gateway. The VPN-only subnet is associated with the main route table and has a route to a virtual private gateway. The Architect has created a new subnet in the VPC and launched an EC2 instance in it. However, the instance cannot connect to the Internet. What is the MOST likely reason?

  1. There is no NAT Gateway available in the new subnet so Internet connectivity is not possible
  2. The subnet has been automatically associated with the main route table which does not have a route to the Internet
  3. The new subnet has not been associated with a route table
  4. The Internet Gateway is experiencing connectivity problems”
A

2…when you create a new subnet, it is automatically associated with the main route table. Therefore, the EC2 instance will not have a route to the Internet in this example. The architect should associate the new subnet with the custom route table.

310
Q

“An issue has been raised to you whereby a client is concerned about the permissions assigned to his containerized applications. The containers are using the EC2 launch type. The current configuration uses the container instance’s IAM roles for assigning permissions to the containerized applications. The client has asked if it’s possible to implement more granular permissions so that some applications can be assigned more restrictive permissions?

  1. This cannot be changed as IAM roles can only be linked to container instances
  2. This can be achieved using IAM roles for tasks, and splitting the containers according to the permissions required to different task definition profiles
  3. This can be achieved by configuring a resource-based policy for each application
  4. This can only be achieved using the Fargate launch type
A

Answer: 2

With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Using this feature you can achieve the required outcome by using IAM roles for tasks and splitting the containers according to the permissions required to different task profiles.”

311
Q

“There is a problem with an EC2 instance that was launched by AWS Auto Scaling. The EC2 status checks have reported that the instance is “Impaired”. What action will AWS Auto Scaling take?

  1. It will launch a new instance immediately and then mark the impaired one for replacement
  2. Auto Scaling will wait for 300 seconds to give the instance a chance to recover
  3. It will mark the instance for termination, terminate it, and then launch a replacement
  4. Auto Scaling performs its own status checks and does not integrate with EC2 status checks
A

3…“If any health check returns an unhealthy status the instance will be terminated. Unlike AZ rebalancing, termination of unhealthy instances happens first, then Auto Scaling attempts to launch new instances to replace terminated instances

AS will not launch a new instance immediately as it always terminates unhealthy instance before launching a replacement”

312
Q

“You are a Solutions Architect at Digital Cloud Training and have been assigned the task of moving some sensitive documents into the AWS cloud. You need to ensure that the security of the documents is maintained. Which AWS features can help ensure that the sensitive documents are secured on the AWS cloud? (choose 2)

  1. EBS encryption with Customer Managed Keys
  2. S3 Server-Side Encryption
  3. IAM Access Policy
  4. EBS snapshots
  5. S3 cross replication”
A

1,2

“It is not specified what types of documents are being moved into the cloud or what services they will be placed on. Therefore we can assume that options include S3 and EBS. Both of these services provide native encryption functionality to ensure security of the sensitive documents. With EBS you can use KMS-managed or customer-managed encryption keys. With S3 you can use client-side or server-side encryption

(IAM access policies are not used for controlling encryption”)

313
Q

According to AWS best practice, where should you launch a database?

A

in a private subnet of your VPC

314
Q

“You are creating a series of environments within a single VPC. You need to implement a system of categorization that allows for identification of EC2 resources by business unit, owner, or environment. Which AWS feature allows you to do this?

  1. Metadata
  2. Parameters
  3. Tags
  4. Custom filters”
A

3, Tags… a tag is a label that you assign to an AWS resource. Each tag consists of a key and an optional value, both of which you define. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment
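
Tagging EC2 resources programmatically with boto3 (the instance ID and tag values are hypothetical placeholders):

  import boto3

  ec2 = boto3.client('ec2')

  # Tag an instance so it can be identified by business unit, owner and environment
  ec2.create_tags(
      Resources=['i-0123456789abcdef0'],          # hypothetical instance ID
      Tags=[
          {'Key': 'BusinessUnit', 'Value': 'Retail'},
          {'Key': 'Owner', 'Value': 'jsmith'},
          {'Key': 'Environment', 'Value': 'UAT'},
      ],
  )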

315
Q

“You created a second ENI (eth1) interface when launching an EC2 instance. You would like to terminate the instance and have not made any changes. What will happen to the attached ENIs?

  1. eth1 will persist but eth0 will be terminated
  2. eth1 will be terminated, but eth0 will persist
  3. Both eth0 and eth1 will be terminated with the instance
  4. Both eth0 and eth1 will persist
A

Answer: 1

By default Eth0 is the only Elastic Network Interface (ENI) created with an EC2 instance when launched. You can add additional interfaces to EC2 instances (number dependent on instances family/type). Default interfaces are terminated with instance termination. Manually added interfaces are not terminated by default”

316
Q

“An EC2 instance in an Auto Scaling Group is having some issues that are causing the ASG to launch new instances based on the dynamic scaling policy. You need to troubleshoot the EC2 instance and prevent the ASG from launching new instances temporarily. What is the best method to accomplish this? (choose 2)

  1. Disable the dynamic scaling policy
  2. Suspend the scaling processes responsible for launching new instances
  3. Place the EC2 instance that is experiencing issues into the Standby state
  4. Disable the launch configuration associated with the EC2 instance
  5. Remove the EC2 instance from the Target Group
A

2,3..

You can suspend and then resume one or more of the scaling processes for your Auto Scaling group. This can be useful when you want to investigate a configuration problem or other issue with your web application and then make changes to your application, without invoking the scaling processes. You can manually move an instance from an ASG and put it in the standby state

Instances in the Standby state are still managed by Auto Scaling and are charged as normal, but they do not count towards available EC2 instances for workload/application use. Auto Scaling does not perform health checks on instances in the Standby state
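
A boto3 sketch of both steps (the ASG name and instance ID are hypothetical placeholders):

  import boto3

  asg = boto3.client('autoscaling')

  # Stop the ASG from launching new instances while troubleshooting
  asg.suspend_processes(AutoScalingGroupName='web-asg',   # hypothetical ASG
                        ScalingProcesses=['Launch'])

  # Move the problem instance into Standby so it stops serving traffic
  asg.enter_standby(AutoScalingGroupName='web-asg',
                    InstanceIds=['i-0123456789abcdef0'],  # hypothetical instance
                    ShouldDecrementDesiredCapacity=True)  # avoid a replacement launch

  # Later: exit_standby(...) and resume_processes(...) to return to normal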

317
Q

When to use a failover routing policy?

A

For active/passive configurations: Route 53 sends traffic to the primary record and fails over to the secondary record when health checks on the primary fail.

318
Q

“Your organization has a data lake on S3 and you need to find a solution for performing in-place queries of the data assets in the data lake. The requirement is to perform both data discovery and SQL querying, and complex queries from a large number of concurrent users using BI tools. What is the BEST combination of AWS services to use in this situation? (choose 2)

  1. AWS Lambda for the complex queries
  2. Amazon Athena for the ad hoc SQL querying
  3. RedShift Spectrum for the complex queries
  4. AWS Glue for the ad hoc SQL querying
A

Answer: 2,3

Performing in-place queries on a data lake allows you to run sophisticated analytics queries directly on the data in S3 without having to load it into a data warehouse

You can use both Athena and Redshift Spectrum against the same data assets. You would typically use Athena for ad hoc data discovery and SQL querying, and then use Redshift Spectrum for more complex queries and scenarios where a large number of data lake users want to run concurrent BI and reporting workloads”

(Lambda is good for functions but not analytics queries and AWS Glue is an ETL service)

319
Q

“You are configuring Route 53 for a customer’s website. Their web servers are behind an Internet-facing ELB. What record set would you create to point the customer’s DNS zone apex record at the ELB?”

  1. “Create a PTR record pointing to the DNS name of the load balancer
  2. Create an A record pointing to the DNS name of the load balancer
  3. Create an A record that is an Alias, and select the ELB DNS as a target
  4. Create a CNAME record that is an Alias, and select the ELB DNS as a target”
A

3.. “An Alias record can be used for resolving apex or naked domain names (e.g. example.com). You can create an A record that is an Alias that uses the customer’s website zone apex domain name and map it to the ELB DNS name”
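
Creating the Alias A record via the Route 53 API with boto3 (the zone IDs and DNS names are hypothetical placeholders; the AliasTarget HostedZoneId must be the ELB's canonical hosted zone ID, not your own zone's):

  import boto3

  r53 = boto3.client('route53')

  r53.change_resource_record_sets(
      HostedZoneId='Z1EXAMPLE',                   # hypothetical: your hosted zone
      ChangeBatch={'Changes': [{
          'Action': 'UPSERT',
          'ResourceRecordSet': {
              'Name': 'example.com.',             # the zone apex
              'Type': 'A',
              'AliasTarget': {
                  'HostedZoneId': 'Z2EXAMPLE',    # hypothetical: the ELB's zone ID
                  'DNSName': 'my-elb-123.us-east-1.elb.amazonaws.com.',
                  'EvaluateTargetHealth': False,
              },
          },
      }]},
  )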

320
Q

“You just attempted to restart a stopped EC2 instance and it immediately changed from a pending state to a terminated state. What are the most likely explanations? (choose 2)

  1. You’ve reached your EBS volume limit
  2. The AMI is unsupported
  3. An EBS snapshot is corrupt
  4. AWS does not currently have enough available On-Demand capacity to service your request
  5. You have reached the limit on the number of instances that you can launch in a region”
A

1,3..

“The following are a few reasons why an instance might immediately terminate:

  • You’ve reached your EBS volume limit
  • An EBS snapshot is corrupt
  • The root EBS volume is encrypted and you do not have permissions to access the KMS key for decryption”

(A lack of available On-Demand capacity prevents an instance from launching; it does not cause an immediate transition to the terminated state)

321
Q

“You need to create an EBS volume to mount to an existing EC2 instance for an application that will be writing structured data to the volume. The application vendor suggests that the performance of the disk should be up to 3 IOPS per GB. You expect the capacity of the volume to grow to 2TB. Taking into account cost effectiveness, which EBS volume type would you select?

  1. General Purpose (GP2)
  2. Provisioned IOPS (Io1)
  3. Cold HDD (SC1)
  4. Throughput Optimized HDD (ST1)”
A

1… “SSD, General Purpose (GP2) provides enough IOPS to support this requirement and is the most economical option that does. Using Provisioned IOPS would be more expensive and the other two options do not provide an SLA for IOPS”

322
Q

What are the limits of General Purpose EBS?

A
  • 3 IOPS per GiB
  • up to 16,000 IOPS per volume
  • volume size 1 GiB to 16 TiB
323
Q

What are the limits of Provisioned IOPS?

A
  • 50 IOPS per GiB
  • up to 64,000 IOPS per volume
  • volume size 4 GiB to 16 TiB
324
Q

“You are discussing EC2 with a colleague and need to describe the differences between EBS-backed instances and Instance store-backed instances. Which of the statements below would be valid descriptions? (choose 2)

  1. On an EBS-backed instance, the default action is for the root EBS volume to be deleted upon termination
  2. EBS volumes can be detached and reattached to other EC2 instances
  3. Instance store volumes can be detached and reattached to other EC2 instances
  4. For both types of volume rebooting the instances will result in data loss
  5. By default, root volumes for both types will be retained on termination unless you configured otherwise”
A

1,2… For EBS-backed instances, the default is to delete the root EBS volume upon termination.

EBS volumes can be detached and attached again without data being lost (instance stores cannot)

325
Q

“An important application you manage uses an Elastic Load Balancer (ELB) to distribute incoming requests amongst a fleet of EC2 instances. You need to ensure any operational issues are identified. Which of the statements below are correct about monitoring of an ELB? (choose 2)

  1. Information is sent to CloudWatch every minute if there are active requests
  2. Access logs can identify requester, IP, and request type
  3. Access logs are enabled by default
  4. CloudWatch metrics can be logged to an S3 bucket
  5. CloudTrail can be used to capture application logs
A

Answer: 1,2

Information is sent by the ELB to CloudWatch every 1 minute when requests are active. Can be used to trigger SNS notifications”

“Access Logs are disabled by default. Includes information about the clients (not included in CloudWatch metrics) including identifying the requester, IP, request type etc. Access logs can be optionally stored and retained in S3”

326
Q

“A Solutions Architect is creating a design for a multi-tiered serverless application. Which two services form the application facing services from the AWS serverless infrastructure? (choose 2)

  1. Amazon ECS
  2. API Gateway
  3. Elastic Load Balancer
  4. AWS Cognito
  5. AWS Lambda
A

2,5 The only application services here are API Gateway and Lambda and these are considered to be serverless services”

327
Q

How does Auto Scaling perform rebalancing? in which order

A

“Auto Scaling can perform rebalancing when it finds that the number of instances across AZs is not balanced. Auto Scaling rebalances by launching new EC2 instances in the AZs that have fewer instances first, only then will it start terminating instances in AZs that had more instances”

328
Q

“You need to run a production batch process quickly that will use several EC2 instances. The process cannot be interrupted and must be completed within a short time period. What is likely to be the MOST cost-effective choice of EC2 instance type to use for this requirement?

  1. Reserved instances
  2. Spot instances
  3. On-demand instances
  4. Flexible instances”
A

On-demand… you need it quickly, cost-effectively and without interruptions.

(spot = interruptions, reserved = for longer requirements with 1-3 year contracts, flexible does not exist)

329
Q

“A new application you are deploying uses Docker containers. You are creating a design for an ECS cluster to host the application. Which statements about ECS clusters are correct? (choose 2)

  1. ECS Clusters are a logical grouping of container instances that you can place tasks on
  2. Clusters can contain tasks using the Fargate and EC2 launch type
  3. Each container instance may be part of multiple clusters at a time
  4. Clusters are AZ specific
  5. Clusters can contain a single container instance type
A

Answer: 1,2

  • ECS Clusters are a logical grouping of container instances that you can place tasks on
  • Clusters can contain tasks using BOTH the Fargate and EC2 launch type
  • Each container instance may only be part of one cluster at a time
  • Clusters are region specific
  • For clusters with the EC2 launch type clusters can contain different container instance types”
330
Q

“You are putting together the design for a new retail website for a high-profile company. The company has previously been the victim of targeted distributed denial-of-service (DDoS) attacks and have requested that you ensure the design includes mitigation techniques.

Which of the following are the BEST techniques to help ensure the availability of the services is not compromised in an attack? (choose 2)

  1. Use Spot instances to reduce the cost impact in case of attack
  2. Use CloudFront for distributing both static and dynamic content
  3. Use Placement Groups to ensure high bandwidth and low latency
  4. Configure Auto Scaling with a high maximum number of instances to ensure it can scale accordingly
  5. Use encryption on your EBS volumes”
A

2,4…

“CloudFront distributes traffic across multiple edge locations and filters requests to ensure that only valid HTTP(S) requests will be forwarded to backend hosts. CloudFront also supports geoblocking, which you can use to prevent requests from particular geographic locations from being served

ELB automatically distributes incoming application traffic across multiple targets, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, and IP addresses, and multiple Availability Zones, which minimizes the risk of overloading a single resource

ELB, like CloudFront, only supports valid TCP requests, so DDoS attacks such as UDP and SYN floods are not able to reach EC2 instances

ELB also offers a single point of management and can serve as a line of defense between the internet and your backend, private EC2 instances”

331
Q

“A membership website your company manages has become quite popular and is gaining members quickly. The website currently runs on EC2 instances with one web server instance and one DB instance running MySQL. You are concerned about the lack of high-availability in the current architecture. What can you do to easily enable HA without making major changes to the architecture?

  1. Create a Read Replica in another AZ
  2. Enable Multi-AZ for the MySQL instance
  3. Install MySQL on an EC2 instance in the same AZ and enable replication
  4. Install MySQL on an EC2 instance in another AZ and enable replication”
A

4… “If you are installing MySQL on an EC2 instance you cannot enable read replicas or multi-AZ. Instead you would need to use Amazon RDS with a MySQL DB engine to use these features

Migrating to RDS would entail a major change to the architecture so is not really feasible. In this example it will therefore be easier to use the native HA features of MySQL rather than to migrate to RDS. You would want to place the second MySQL DB instance in another AZ to enable high availability and fault tolerance”

332
Q

“One of your clients is a banking regulator and they run an application that provides auditing information to the general public using AWS Lambda and API Gateway. A Royal Commission has exposed some suspect lending practices and this has been picked up by the media and raised concern amongst the general public. With some major upcoming announcements expected you’re concerned about traffic spikes hitting the client’s application. How can you protect the backend systems from traffic spikes?

  1. Use ElastiCache as the front-end to cache frequent queries
  2. Use a CloudFront Edge Cache
  3. Enable throttling limits and result caching in API Gateway
  4. Put the APIs in an S3 bucket and publish as a static website using CloudFront
A

3… You can throttle and monitor requests to protect your backend. Resiliency through throttling rules is based on the number of requests per second for each HTTP method (GET, PUT). Throttling can be configured at multiple levels including Global and Service Call

API Gateway is the front-end component of this application therefore that is where you need to implement the controls. You cannot use CloudFront or ElastiCache to cache APIs. You also cannot put APIs in a bucket and publish as a static website

333
Q

“You would like to implement a method of automating the creation, retention, and deletion of backups for the EBS volumes in your VPC. What is the easiest way to automate these tasks using AWS tools?

  1. Create a scheduled job and run the AWS CLI command “create-snapshot” to take backups of the EBS volumes
  2. Create a scheduled job and run the AWS CLI command “create-backup” to take backups of the EBS volumes
  3. Configure EBS volume replication to create a backup on S3
  4. Use the EBS Data Lifecycle Manager (DLM) to manage snapshots of the volumes”
A

4… “You back up EBS volumes by taking snapshots. This can be automated via the AWS CLI command “create-snapshot”. However, the question is asking for a way to automate not just the creation of the snapshot but the retention and deletion too. The EBS Data Lifecycle Manager (DLM) can automate all of these actions for you, and this can be performed centrally from within the management console”
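
A sketch of a DLM policy that snapshots tagged volumes daily and keeps the last 7 copies (the role ARN and target tag are hypothetical placeholders):

  import boto3

  dlm = boto3.client('dlm')

  dlm.create_lifecycle_policy(
      ExecutionRoleArn='arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole',
      Description='Daily backups of tagged EBS volumes',
      State='ENABLED',
      PolicyDetails={
          'ResourceTypes': ['VOLUME'],
          'TargetTags': [{'Key': 'Backup', 'Value': 'Daily'}],   # hypothetical tag
          'Schedules': [{
              'Name': 'DailySnapshots',
              'CreateRule': {'Interval': 24, 'IntervalUnit': 'HOURS',
                             'Times': ['03:00']},
              'RetainRule': {'Count': 7},   # covers creation, retention and deletion
          }],
      },
  )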

334
Q

An application has been deployed in a private subnet within your VPC and an ELB will be used to accept incoming connections. You need to set up the configuration for the listeners on the ELB. When using a Classic Load Balancer, which of the following combinations of listeners support the proxy protocol? (choose 2)

  1. Front-End TCP & Back-End TCP
  2. Front-End SSL & Back-End SSL
  3. Front-End SSL & Back-End TCP
  4. Front-End HTTP & Back-End SSL
  5. Front-End TCP & Back-End SSL
A

Answer: 1,3

The proxy protocol only applies to Layer 4 listeners, and the back-end listener must be TCP for the proxy protocol to be used.

When using the proxy protocol, the front-end listener can be either TCP or SSL.
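
For reference, a minimal boto3 sketch of enabling the proxy protocol on a Classic Load Balancer’s back-end port; the load balancer name "my-clb" and port 80 are hypothetical:

  import boto3

  elb = boto3.client("elb")  # Classic Load Balancer API

  # Create a ProxyProtocol policy...
  elb.create_load_balancer_policy(
      LoadBalancerName="my-clb",
      PolicyName="EnableProxyProtocol",
      PolicyTypeName="ProxyProtocolPolicyType",
      PolicyAttributes=[{"AttributeName": "ProxyProtocol", "AttributeValue": "true"}],
  )

  # ...and attach it to the back-end instance port
  elb.set_load_balancer_policies_for_backend_server(
      LoadBalancerName="my-clb",
      InstancePort=80,
      PolicyNames=["EnableProxyProtocol"],
  )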

335
Q

A new application you are designing will store data in an Amazon Aurora MySQL DB. You are looking for a way to enable regional disaster recovery capabilities with fast replication and fast failover. Which of the following options is the BEST solution?

  1. Use Amazon Aurora Global Database
  2. Enable Multi-AZ for the Aurora DB
  3. Create a cross-region Aurora Read Replica
  4. Create an EBS backup of the Aurora volumes and use cross-region replication to copy the snapshot
A

Answer: 1

Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages. Aurora Global Database uses storage-based replication with typical latency of less than 1 second, using dedicated infrastructure that leaves your database fully available to serve application workloads. In the unlikely event of a regional degradation or outage, one of the secondary regions can be promoted to full read/write capabilities in less than 1 minute.

You can create an Amazon Aurora MySQL DB cluster as a Read Replica in a different AWS Region than the source DB cluster. Taking this approach can improve your disaster recovery capabilities, let you scale read operations into an AWS Region that is closer to your users, and make it easier to migrate from one AWS Region to another. However, this solution would not provide the fast storage replication and fast failover capabilities of the Aurora Global Database and is therefore not the best option.

Enabling Multi-AZ for the Aurora DB would provide AZ-level resiliency within the region, not across regions.
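
As an illustration, a boto3 sketch of promoting an existing regional cluster into a Global Database and adding a secondary region; all identifiers and region names are hypothetical:

  import boto3

  # Promote an existing regional Aurora cluster into a Global Database
  rds_primary = boto3.client("rds", region_name="us-east-1")
  rds_primary.create_global_cluster(
      GlobalClusterIdentifier="my-global-db",
      SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster",
  )

  # Add a read-only secondary cluster in another region; it can be
  # promoted to read/write if the primary region becomes unavailable
  rds_secondary = boto3.client("rds", region_name="eu-west-1")
  rds_secondary.create_db_cluster(
      DBClusterIdentifier="my-aurora-secondary",
      Engine="aurora-mysql",
      GlobalClusterIdentifier="my-global-db",
  )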

336
Q

When using throttling controls with API Gateway, what happens when request submissions exceed the steady-state request rate and burst limits?

  1. The requests will be buffered in a cache until the load reduces
  2. API Gateway fails the limit-exceeding requests and returns “429 Too Many Requests” error responses to the client
  3. API Gateway fails the limit-exceeding requests and returns “500 Internal Server Error” error responses to the client
  4. API Gateway drops the requests and does not return a response to the client
A

Answer: 2

You can throttle and monitor requests to protect your backend. Throttling rules are based on the number of requests per second for each HTTP method (GET, PUT, etc.), and can be configured at multiple levels, including globally and per service call.

When request submissions exceed the steady-state request rate and burst limits, API Gateway fails the limit-exceeding requests and returns 429 Too Many Requests error responses to the client.
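
On the client side, a throttled caller would normally retry with backoff. A minimal Python sketch, assuming the third-party requests library and a hypothetical endpoint URL:

  import time
  import requests  # assumes the requests library is installed

  def call_with_backoff(url, max_retries=5):
      # Retry on HTTP 429, backing off exponentially between attempts
      for attempt in range(max_retries):
          resp = requests.get(url)
          if resp.status_code != 429:
              return resp
          # Honor a Retry-After header if one is returned
          delay = float(resp.headers.get("Retry-After", 2 ** attempt))
          time.sleep(delay)
      raise RuntimeError("Still throttled (429) after all retries")

  # Hypothetical API Gateway endpoint:
  # resp = call_with_backoff("https://abc123.execute-api.us-east-1.amazonaws.com/prod/audits")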

337
Q

An Auto Scaling Group in which you have four EC2 instances running is becoming heavily loaded. The instances are using the m4.large instance type and the CPUs are hitting 80%. Due to licensing constraints you don’t want to add additional instances to the ASG, so you are planning to upgrade to the m4.xlarge instance type instead. You need to make the change immediately but don’t want to terminate the existing instances. How can you perform the change without causing the ASG to launch new instances? (choose 2)

  1. Stop each instance and change its instance type. Start the instance again
  2. Create a new launch configuration with the new instance type specified
  3. On the ASG suspend the Auto Scaling process until you have completed the change
  4. Edit the existing launch configuration and specify the new instance type
  5. Change the instance type and then restart the instance
A

Answer: 1,3

When you resize an instance, you must select an instance type that is compatible with the configuration of the instance. You must stop your Amazon EBS-backed instance before you can change its instance type.

You can suspend and then resume one or more of the scaling processes for your Auto Scaling group. Suspending scaling processes can be useful when you want to investigate a configuration problem or other issue with your web application and then make changes to your application, without invoking the scaling processes

You do not need to create a new launch configuration and you cannot edit an existing launch configuration

You cannot change an instance type without first stopping the instance.
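
Putting the two steps together, a minimal boto3 sketch; the ASG name and instance ID are hypothetical:

  import boto3

  asg = boto3.client("autoscaling")
  ec2 = boto3.client("ec2")

  # Suspend scaling so the ASG does not replace instances while we work
  asg.suspend_processes(AutoScalingGroupName="my-asg")

  instance_id = "i-0123456789abcdef0"
  ec2.stop_instances(InstanceIds=[instance_id])
  ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

  # Change the instance type while the instance is stopped
  ec2.modify_instance_attribute(
      InstanceId=instance_id,
      InstanceType={"Value": "m4.xlarge"},
  )
  ec2.start_instances(InstanceIds=[instance_id])

  # Resume normal scaling once every instance has been resized
  asg.resume_processes(AutoScalingGroupName="my-asg")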

338
Q

What feature of Amazon Cognito is used to obtain temporary credentials to access AWS services?

  1. User Pools
  2. Identity Pools
  3. SAML Identity Providers
  4. Key Pairs
A

Answer: 2

With an identity pool, users can obtain temporary AWS credentials to access AWS services, such as Amazon S3 and DynamoDB.
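
For illustration, a minimal boto3 sketch of the identity pool credential flow; the identity pool ID is hypothetical and the pool is assumed to allow unauthenticated (guest) identities:

  import boto3

  cognito = boto3.client("cognito-identity", region_name="us-east-1")

  # Obtain an identity ID from the pool, then exchange it for
  # temporary AWS credentials
  identity = cognito.get_id(
      IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555"
  )
  creds = cognito.get_credentials_for_identity(IdentityId=identity["IdentityId"])

  # The temporary credentials can now be used with any AWS SDK client
  s3 = boto3.client(
      "s3",
      aws_access_key_id=creds["Credentials"]["AccessKeyId"],
      aws_secret_access_key=creds["Credentials"]["SecretKey"],
      aws_session_token=creds["Credentials"]["SessionToken"],
  )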