Udemy Exam 5 Flashcards

1
Q

A junior developer is learning to build websites using HTML, CSS, and JavaScript. He has created a static website and then deployed it on Amazon S3. Now he can’t seem to figure out the endpoint for his super cool website.

As a solutions architect, can you help him figure out the allowed formats for the Amazon S3 website endpoints? (Select two)

A

http://bucket-name.s3-website.Region.amazonaws.com
http://bucket-name.s3-website-Region.amazonaws.com

  • To host a static website on Amazon S3, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket.
  • When you configure a bucket as a static website, you enable static website hosting, set permissions, and add an index document.
  • Depending on your website requirements, you can also configure other options, including redirects, web traffic logging, and custom error documents.
  • When you configure your bucket as a static website, the website is available at the AWS Region-specific website endpoint of the bucket.

Depending on your Region, your Amazon S3 website endpoints follow one of these two formats.

  1. s3-website dash (-) Region ‐ http://bucket-name.s3-website-Region.amazonaws.com
  2. s3-website dot (.) Region ‐ http://bucket-name.s3-website.Region.amazonaws.com

These URLs return the default index document that you configure for the website.
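For illustration, website hosting can also be enabled programmatically. A minimal boto3 sketch follows; the bucket name and document names are assumptions, and the bucket must separately allow public reads:

    import boto3

    s3 = boto3.client("s3")

    # Enable the website endpoint on a hypothetical bucket.
    s3.put_bucket_website(
        Bucket="my-static-site",
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )

    # The site is then served at the Region-specific website endpoint, e.g.
    # http://my-static-site.s3-website-us-east-1.amazonaws.com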

2
Q

You have built an application that is deployed with an Elastic Load Balancer and an Auto Scaling Group. As a Solutions Architect, you have configured aggressive CloudWatch alarms, making your Auto Scaling Group (ASG) scale in and out very quickly, renewing your fleet of Amazon EC2 instances on a daily basis. A production bug appeared two days ago, but the team is unable to SSH into the instance to debug the issue, because the instance has already been terminated by the ASG. The log files are saved on the EC2 instance.

How will you resolve the issue and make sure it doesn’t happen again?

A

Install the CloudWatch Logs agent on the EC2 instances to send logs to CloudWatch

  • You can use the CloudWatch Logs agent installer on an existing EC2 instance to install and configure the CloudWatch Logs agent.
  • After installation is complete, logs automatically flow from the instance to the log stream you create while installing the agent.
  • The agent confirms that it has started and it stays running until you disable it.

Here, the natural and by far the easiest solution would be to use the CloudWatch Logs agents on the EC2 instances to automatically send log files into CloudWatch, so we can analyze them in the future easily should any problem arise.
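As a rough sketch of what the agent needs, the unified CloudWatch agent reads a JSON configuration describing which files to ship; the file path, log group name, and install location below are assumptions for a typical Linux setup:

    import json

    # Minimal log-collection config for the unified CloudWatch agent
    # (illustrative paths and names).
    config = {
        "logs": {
            "logs_collected": {
                "files": {
                    "collect_list": [{
                        "file_path": "/var/log/myapp/app.log",
                        "log_group_name": "myapp-logs",
                        "log_stream_name": "{instance_id}",
                    }]
                }
            }
        }
    }

    # Write it where the agent expects it, then load it with
    # amazon-cloudwatch-agent-ctl.
    with open("/opt/aws/amazon-cloudwatch-agent/etc/config.json", "w") as f:
        json.dump(config, f, indent=2)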

  • To control whether an Auto Scaling group can terminate a particular instance when scaling in, use instance scale-in protection.
  • You can enable the instance scale-in protection setting on an Auto Scaling group or on an individual Auto Scaling instance.
  • When the Auto Scaling group launches an instance, it inherits the instance scale-in protection setting of the Auto Scaling group.
  • You can change the instance scale-in protection setting for an Auto Scaling group or an Auto Scaling instance at any time.
3
Q

A big data analytics company is using Kinesis Data Streams (KDS) to process IoT data from the field devices of an agricultural sciences company. Multiple consumer applications are using the incoming data streams and the engineers have noticed a performance lag for the data delivery speed between producers and consumers of the data streams.

As a solutions architect, which of the following would you recommend for improving the performance for the given use-case?

A

Amazon Kinesis Data Streams (KDS)

  • Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service.
  • KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.
  • By default, the 2MB/second/shard output is shared between all of the applications consuming data from the stream.
  • You should use enhanced fan-out if you have multiple consumers retrieving data from a stream in parallel.
  • With enhanced fan-out, each consumer registered to use it receives its own 2 MB/second pipe of read throughput per shard, and this throughput automatically scales with the number of shards in a stream (see the sketch below).
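A minimal boto3 sketch of registering an enhanced fan-out consumer; the stream ARN and consumer name are placeholders:

    import boto3

    kinesis = boto3.client("kinesis")

    # Register a dedicated-throughput consumer on the stream.
    resp = kinesis.register_stream_consumer(
        StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/iot-data",
        ConsumerName="analytics-app",
    )
    consumer_arn = resp["Consumer"]["ConsumerARN"]

    # The registered consumer then reads via the SubscribeToShard API and
    # receives its own 2 MB/second of read throughput per shard.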
4
Q

An application hosted on Amazon EC2 contains sensitive personal information about all its customers and needs to be protected from all types of cyber-attacks. The company is considering using the AWS Web Application Firewall (WAF) to handle this requirement.

Can you identify the correct solution leveraging the capabilities of WAF?

A

Create a CloudFront distribution for the application on Amazon EC2 instances. Deploy AWS WAF on Amazon CloudFront to provide the necessary safety measures

  • When you use AWS WAF with CloudFront, you can protect your applications running on any HTTP web server, whether it’s a web server running in Amazon Elastic Compute Cloud (Amazon EC2) or a web server that you manage privately.
  • You can also configure CloudFront to require HTTPS between CloudFront and your own webserver, as well as between viewers and CloudFront.

AWS WAF is tightly integrated with Amazon CloudFront and the Application Load Balancer (ALB), services that AWS customers commonly use to deliver content for their websites and applications.

When you use AWS WAF on Amazon CloudFront, your rules run in all AWS Edge Locations, located around the world close to your end-users.

  • This means security doesn’t come at the expense of performance.
  • Blocked requests are stopped before they reach your web servers.
  • When you use AWS WAF on Application Load Balancer, your rules run in the region and can be used to protect internet-facing as well as internal load balancers.
5
Q

A leading media company wants to do an accelerated online migration of hundreds of terabytes of files from their on-premises data center to Amazon S3 and then establish a mechanism to access the migrated data for ongoing updates from the on-premises applications.

As a solutions architect, which of the following would you select as the MOST performant solution for the given use-case?

A

Use AWS DataSync to migrate existing data to Amazon S3 and then use File Gateway to retain access to the migrated data for ongoing updates from the on-premises applications

  • AWS DataSync is an online data transfer service that simplifies, automates, and accelerates copying large amounts of data to and from AWS storage services over the internet or AWS Direct Connect.
  • AWS DataSync fully automates and accelerates moving large active datasets to AWS, up to 10 times faster than command-line tools.

It is natively integrated with Amazon S3, Amazon EFS, Amazon FSx for Windows File Server, Amazon CloudWatch, and AWS CloudTrail, which provides seamless and secure access to your storage services, as well as detailed monitoring of the transfer.

  • DataSync uses a purpose-built network protocol and scale-out architecture to transfer data.
  • A single DataSync agent is capable of saturating a 10 Gbps network link.

DataSync fully automates the data transfer.

  • It comes with retry and network resiliency mechanisms, network optimizations, built-in task scheduling, monitoring via the DataSync API and Console, and CloudWatch metrics, events, and logs that provide granular visibility into the transfer process.

DataSync performs data integrity verification both during the transfer and at the end of the transfer.

  • AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage.
  • The service provides three different types of gateways – Tape Gateway, File Gateway, and Volume Gateway – that seamlessly connect on-premises applications to cloud storage, caching data locally for low-latency access.
  • File gateway offers SMB or NFS-based access to data in Amazon S3 with local caching.

The combination of DataSync and File Gateway is the correct solution.

  • AWS DataSync enables you to automate and accelerate online data transfers to AWS storage services.
  • File Gateway then provides your on-premises applications with low latency access to the migrated data.
6
Q

A company runs its EC2 servers behind an Application Load Balancer along with an Auto Scaling group. The engineers at the company want to be able to install proprietary tools on each instance and perform a pre-activation status check of these tools whenever an instance is provisioned because of a scale-out event from an auto-scaling policy.

Which of the following options can be used to enable this custom action?

A

Use the Auto Scaling group lifecycle hook to put the instance in a wait state and launch a custom script that installs the proprietary tools and performs a pre-activation status check

  • An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for automatic scaling and management.

Auto Scaling group lifecycle hooks

  • Enable you to perform custom actions as the Auto Scaling group launches or terminates instances.
  • Lifecycle hooks enable you to perform custom actions by pausing instances as an Auto Scaling group launches or terminates them.
  • When an instance is paused, it remains in a wait state either until you complete the lifecycle action using the complete-lifecycle-action command or the CompleteLifecycleAction operation, or until the timeout period ends (one hour by default).

For example, you could install or configure software on newly launched instances, or download log files from an instance before it terminates.
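A minimal boto3 sketch of this flow, with the ASG name, hook name, timeout, and instance id as placeholders:

    import boto3

    asg = boto3.client("autoscaling")

    # Pause newly launched instances until the tools are installed and checked.
    asg.put_lifecycle_hook(
        AutoScalingGroupName="web-asg",
        LifecycleHookName="install-tools",
        LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
        HeartbeatTimeout=900,        # seconds to stay in the wait state
        DefaultResult="ABANDON",     # outcome if no one completes the hook
    )

    # The bootstrap script calls this once the pre-activation check passes,
    # releasing the instance into service.
    asg.complete_lifecycle_action(
        AutoScalingGroupName="web-asg",
        LifecycleHookName="install-tools",
        LifecycleActionResult="CONTINUE",
        InstanceId="i-0123456789abcdef0",
    )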

7
Q

A mobile chat application uses DynamoDB as its database service to provide low latency chat updates. A new developer has joined the team and is reviewing the configuration settings for DynamoDB which have been tweaked for certain technical requirements. CloudTrail service has been enabled on all the resources used for the project. Yet, DynamoDB encryption details are nowhere to be found.

Which of the following options can explain the root cause for the given issue?

A

By default, all DynamoDB tables are encrypted under an AWS owned customer master key (CMK), which does not write to CloudTrail logs

  • AWS owned CMKs are a collection of CMKs that an AWS service owns and manages for use in multiple AWS accounts.
  • Although AWS owned CMKs are not in your AWS account, an AWS service can use its AWS owned CMKs to protect the resources in your account.
  • You do not need to create or manage the AWS owned CMKs.
  • However, you cannot view, use, track, or audit them.
  • You are not charged a monthly fee or usage fee for AWS owned CMKs and they do not count against the AWS KMS quotas for your account.
  • The key rotation strategy for an AWS owned CMK is determined by the AWS service that creates and manages the CMK.

All DynamoDB tables are encrypted.

  • There is no option to enable or disable encryption for new or existing tables.
  • By default, all tables are encrypted under an AWS owned customer master key (CMK) in the DynamoDB service account.
  • However, you can select an option to encrypt some or all of your tables under a customer-managed CMK or the AWS managed CMK for DynamoDB in your account.
8
Q

A medium-sized business has a taxi dispatch application deployed on an EC2 instance. Because of an unknown bug, the application causes the instance to freeze regularly. Then, the instance has to be manually restarted via the AWS management console.

Which of the following is the MOST cost-optimal and resource-efficient way to implement an automated solution until a permanent fix is delivered by the development team?

A

Setup a CloudWatch alarm to monitor the health status of the instance. In case of an Instance Health Check failure, an EC2 Reboot CloudWatch Alarm Action can be used to reboot the instance

Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot, or recover your EC2 instances.

  • You can use the stop or terminate actions to help you save money when you no longer need an instance to be running.
  • You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs.
  • You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically reboots the instance.

The reboot alarm action is recommended for Instance Health Check failures (as opposed to the recover alarm action, which is suited for System Health Check failures).
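A minimal boto3 sketch of such an alarm; the instance id is a placeholder, and arn:aws:automate:...:ec2:reboot is the built-in reboot alarm action:

    import boto3

    cw = boto3.client("cloudwatch", region_name="us-east-1")

    # Reboot the instance after 3 consecutive failed instance status checks.
    cw.put_metric_alarm(
        AlarmName="reboot-on-instance-check-failure",
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed_Instance",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=3,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=["arn:aws:automate:us-east-1:ec2:reboot"],
    )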

9
Q

As a Solutions Architect, you have been hired to work with the engineering team at a company to create a REST API using the serverless architecture.

Which of the following solutions will you recommend to move the company to the serverless architecture?

A

API Gateway exposing Lambda Functionality

  • Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
  • APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services.
  • AWS Lambda lets you run code without provisioning or managing servers.
  • You pay only for the compute time you consume.
  • API Gateway can expose Lambda functionality through RESTful APIs.

Both are serverless options offered by AWS and hence the right choice for this scenario, considering all the functionality they offer.
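To make the pattern concrete, a Lambda function behind an API Gateway proxy integration simply returns an HTTP-shaped response; the handler below is a hypothetical sketch:

    import json

    def handler(event, context):
        # API Gateway passes the method, path, and query string in `event`.
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"hello {name}"}),
        }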

10
Q

An online gaming company wants to block access to its application from specific countries; however, the company wants to allow its remote development team (from one of the blocked countries) to have access to the application. The application is deployed on EC2 instances running under an Application Load Balancer (ALB) with AWS WAF.

As a solutions architect, which of the following solutions can be combined to address the given use-case? (Select two)

A

Use WAF geo match statement listing the countries that you want to block

Use WAF IP set statement that specifies the IP addresses that you want to allow through

  • AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources.
  • AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns and rules that filter out specific traffic patterns you define.
  • You can deploy AWS WAF on Amazon CloudFront as part of your CDN solution, the Application Load Balancer that fronts your web servers or origin servers running on EC2, or Amazon API Gateway for your APIs.

To block specific countries, you can create a WAF geo match statement listing the countries that you want to block, and to allow traffic from IPs of the remote development team, you can create a WAF IP set statement that specifies the IP addresses that you want to allow through.
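A rough boto3 (wafv2) sketch of the two building blocks; the IP set name, CIDR, and country codes are placeholders, and the rule statements would be combined inside a web ACL with the allow rule at a higher priority than the geo block:

    import boto3

    waf = boto3.client("wafv2", region_name="us-east-1")

    # IP set holding the remote dev team's addresses (placeholder CIDR).
    ip_set = waf.create_ip_set(
        Name="dev-team-allow-list",
        Scope="REGIONAL",              # REGIONAL scope for an ALB association
        IPAddressVersion="IPV4",
        Addresses=["203.0.113.0/24"],
    )

    # Rule statements to place inside the web ACL, evaluated by priority.
    allow_statement = {
        "IPSetReferenceStatement": {"Arn": ip_set["Summary"]["ARN"]}
    }
    block_statement = {
        "GeoMatchStatement": {"CountryCodes": ["CN", "RU"]}  # example codes
    }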

11
Q

An e-commerce company uses Amazon SQS queues to decouple their application architecture. The engineering team has observed message processing failures for some customer orders.

As a solutions architect, which of the following solutions would you recommend for handling such message failures?

A

Use a dead-letter queue to handle message processing failures

  • Dead-letter queues can be used by other queues (source queues) as a target for messages that can’t be processed (consumed) successfully.
  • Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn’t succeed.
  • Sometimes, messages can’t be processed because of a variety of possible issues, such as when a user comments on a story but it remains unprocessed because the original story itself is deleted by the author while the comments were being posted.

In such a case, the dead-letter queue can be used to handle message processing failures.
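Configuring a dead-letter queue amounts to attaching a redrive policy to the source queue; a minimal boto3 sketch with a placeholder queue URL and ARN:

    import json

    import boto3

    sqs = boto3.client("sqs")

    # After 5 failed receives, SQS moves the message to the dead-letter queue.
    sqs.set_queue_attributes(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders",
        Attributes={
            "RedrivePolicy": json.dumps({
                "deadLetterTargetArn":
                    "arn:aws:sqs:us-east-1:123456789012:orders-dlq",
                "maxReceiveCount": "5",
            })
        },
    )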

12
Q

Your company is building a music sharing platform on which users can upload the songs of their choice. As a solutions architect for the platform, you have designed an architecture that will leverage a Network Load Balancer linked to an Auto Scaling Group across multiple availability zones. You are currently running with 100 Amazon EC2 instances with an Auto Scaling Group that needs to be able to share the storage layer for the music files.

Which technology do you recommend?

A
  • Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
  • It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

Here, we need a network file system (NFS), which is exactly what EFS is designed for. So, EFS is the correct option.

13
Q

A silicon valley based healthcare startup uses AWS Cloud for its IT infrastructure. The startup stores patient health records on Amazon S3. The engineering team needs to implement an archival solution based on Amazon S3 Glacier to enforce regulatory and compliance controls on data access.

As a solutions architect, which of the following solutions would you recommend?

A

Use S3 Glacier vault to store the sensitive archived data and then use a vault lock policy to enforce compliance controls

  • Amazon S3 Glacier is a secure, durable, and extremely low-cost Amazon S3 cloud storage class for data archiving and long-term backup.
  • It is designed to deliver 99.999999999% durability, and provide comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements.
  • An S3 Glacier vault is a container for storing archives.
  • When you create a vault, you specify a vault name and the AWS Region in which you want to create the vault.

S3 Glacier Vault Lock

  • Allows you to easily deploy and enforce compliance controls for individual S3 Glacier vaults with a vault lock policy.
  • You can specify controls such as “write once read many” (WORM) in a vault lock policy and lock the policy from future edits. Therefore, this is the correct option (see the sketch below).
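A sketch of one possible control, assuming a vault named patient-records: the policy denies archive deletion for roughly 7 years, and locking happens in two steps (initiate, then complete within 24 hours):

    import json

    import boto3

    glacier = boto3.client("glacier")

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "deny-delete-before-retention",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "glacier:DeleteArchive",
            "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/patient-records",
            "Condition": {
                "NumericLessThan": {"glacier:ArchiveAgeInDays": "2555"}
            },
        }],
    }

    # InitiateVaultLock starts the 24-hour in-progress window;
    # CompleteVaultLock makes the policy immutable.
    resp = glacier.initiate_vault_lock(
        accountId="-",
        vaultName="patient-records",
        policy={"Policy": json.dumps(policy)},
    )
    glacier.complete_vault_lock(
        accountId="-",
        vaultName="patient-records",
        lockId=resp["lockId"],
    )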
14
Q

The engineering team at a weather tracking company wants to enhance the performance of its relational database and is looking for a caching solution that supports geospatial data.

As a solutions architect, which of the following solutions will you suggest?

A

Use Amazon ElastiCache for Redis

Amazon ElastiCache

  • Is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud.

Redis, which stands for Remote Dictionary Server,

  • Is a fast, open-source, in-memory key-value data store for use as a database, cache, message broker, and queue.
  • Redis now delivers sub-millisecond response times enabling millions of requests per second for real-time applications in Gaming, Ad-Tech, Financial Services, Healthcare, and IoT.
  • Redis is a popular choice for caching, session management, gaming, leaderboards, real-time analytics, geospatial, ride-hailing, chat/messaging, media streaming, and pub/sub apps.
  • All Redis data resides in the server’s main memory, in contrast to databases such as PostgreSQL, Cassandra, MongoDB and others that store most data on disk or on SSDs.
  • In comparison to traditional disk based databases where most operations require a roundtrip to disk, in-memory data stores such as Redis don’t suffer the same penalty.
  • They can therefore support an order of magnitude more operations and faster response times.

The result is – blazing fast performance with average read or write operations taking less than a millisecond and support for millions of operations per second.

Redis has purpose-built commands for working with real-time geospatial data at scale.

  • You can perform operations like finding the distance between two elements (for example, people or places) and finding all elements within a given distance of a point (see the sketch below).
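A small redis-py (4.x) sketch of those geospatial commands; the endpoint, key, and member names are placeholders:

    import redis

    r = redis.Redis(host="my-cluster.abc123.use1.cache.amazonaws.com", port=6379)

    # GEOADD stores members as (longitude, latitude, name) triples.
    r.geoadd("stations", (77.2090, 28.6139, "station-delhi"))
    r.geoadd("stations", (72.8777, 19.0760, "station-mumbai"))

    # Distance between two members, in kilometers.
    print(r.geodist("stations", "station-delhi", "station-mumbai", unit="km"))

    # All members within 100 km of a given point.
    print(r.geosearch("stations", longitude=77.0, latitude=28.5,
                      radius=100, unit="km"))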
15
Q

You are a cloud architect at an IT company. The company has multiple enterprise customers that manage their own mobile apps that capture and send data to Amazon Kinesis Data Streams. They have been receiving ProvisionedThroughputExceededException errors. You have been contacted to help and, upon analysis, you notice that messages are being sent one by one at a high rate.

Which of the following options will help with the exception while keeping costs at a minimum?

A

Use batch messages

  • Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service.
  • KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.
  • The data collected is available in milliseconds to enable real-time analytics use cases such as real-time dashboards, real-time anomaly detection, dynamic pricing, and more.

When a host needs to send many records per second (RPS) to Amazon Kinesis, simply calling the basic PutRecord API action in a loop is inadequate.

To reduce overhead and increase throughput, the application must batch records and implement parallel HTTP requests.

  • This will increase the overall efficiency and ensure you are using the shards optimally (see the sketch below).
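A minimal boto3 sketch of batching with PutRecords (up to 500 records per call); the stream name and payloads are placeholders:

    import json

    import boto3

    kinesis = boto3.client("kinesis")

    # One PutRecords call instead of 100 individual PutRecord calls.
    records = [
        {"Data": json.dumps({"device": i, "temp": 21.5}).encode(),
         "PartitionKey": str(i)}
        for i in range(100)
    ]
    resp = kinesis.put_records(StreamName="iot-stream", Records=records)

    # Retry only the records that were throttled or failed.
    failed = [rec for rec, out in zip(records, resp["Records"])
              if "ErrorCode" in out]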
16
Q

The infrastructure team at a company maintains 5 different VPCs (let’s call these VPCs A, B, C, D, E) for resource isolation. Due to the changed organizational structure, the team wants to interconnect all VPCs together. To facilitate this, the team has set up VPC peering connections between VPC A and all other VPCs in a hub and spoke model with VPC A at the center. However, the team has still failed to establish connectivity between all VPCs.

As a solutions architect, which of the following would you recommend as the MOST resource-efficient and scalable solution?

A

Use a transit gateway to interconnect the VPCs

A transit gateway is a network transit hub that you can use to interconnect your virtual private clouds (VPC) and on-premises networks.

  • A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses.
  • Transitive peering does not work for VPC peering connections.
  • So, if you have a VPC peering connection between VPC A and VPC B (pcx-aaaabbbb), and between VPC A and VPC C (pcx-aaaacccc), there is still no connectivity between VPC B and VPC C through VPC A.
  • Instead of using VPC peering, you can use an AWS Transit Gateway that acts as a network transit hub to interconnect your VPCs or connect your VPCs with on-premises networks (see the sketch below).
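A rough boto3 sketch of the hub-and-spoke replacement; VPC and subnet ids are placeholders, and route-table updates are omitted:

    import boto3

    ec2 = boto3.client("ec2")

    tgw = ec2.create_transit_gateway(Description="hub for VPCs A-E")
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # One attachment per VPC replaces the mesh of peering connections.
    for vpc_id, subnet_id in [("vpc-aaaa", "subnet-aaaa"),
                              ("vpc-bbbb", "subnet-bbbb")]:
        ec2.create_transit_gateway_vpc_attachment(
            TransitGatewayId=tgw_id,
            VpcId=vpc_id,
            SubnetIds=[subnet_id],
        )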
17
Q

While troubleshooting, a cloud architect realized that the Amazon EC2 instance is unable to connect to the internet using the Internet Gateway.

Which conditions should be met for internet connectivity to be established? (Select two)

A

The network ACLs associated with the subnet must have rules to allow inbound and outbound traffic

  • The network access control lists (ACLs) that are associated with the subnet must have rules to allow inbound and outbound traffic on port 80 (for HTTP traffic) and port 443 (for HTTPS traffic).
  • This is a necessary condition for Internet Gateway connectivity

The route table in the instance’s subnet should have a route to an Internet Gateway

  • A route table contains a set of rules, called routes, that are used to determine where network traffic from your subnet or gateway is directed.
  • The route table in the instance’s subnet should have a route defined to the Internet Gateway (see the sketch below).
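A minimal boto3 sketch of that default route; the route table and Internet Gateway ids are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Send all non-local IPv4 traffic from the subnet to the Internet Gateway.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId="igw-0123456789abcdef0",
    )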
18
Q

You have just terminated an instance in the us-west-1a availability zone. The attached EBS volume is now available for attachment to other instances. An intern launches a new Linux EC2 instance in the us-west-1b availability zone and is attempting to attach the EBS volume. The intern informs you that it is not possible and needs your help.

Which of the following explanations would you provide to them?

A

EBS volumes are AZ locked

  • An Amazon EBS volume is a durable, block-level storage device that you can attach to your instances.
  • After you attach a volume to an instance, you can use it as you would use a physical hard drive.
  • EBS volumes are flexible.

For current-generation volumes attached to current-generation instance types, you can dynamically increase size, modify the provisioned IOPS capacity, and change volume type on live production volumes.

When you create an EBS volume, it is automatically replicated within its Availability Zone to prevent data loss due to the failure of any single hardware component.

  • You can attach an EBS volume only to an EC2 instance in the same Availability Zone. To use the data in another AZ, you snapshot the volume and create a new volume from that snapshot in the target AZ (see the sketch below).
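A boto3 sketch of that snapshot-based move, with placeholder ids; this is the standard workaround rather than anything specific to this card:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-1")

    # Snapshot the volume, then create a new volume from it in the target AZ.
    snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                               Description="move to us-west-1b")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    new_vol = ec2.create_volume(SnapshotId=snap["SnapshotId"],
                                AvailabilityZone="us-west-1b")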
19
Q

A health-care company manages its web application on Amazon EC2 instances running in an Auto Scaling group (ASG). The company provides ambulances for critical patients and needs the application to be reliable. The company's workload can be managed on 2 EC2 instances and can peak up to 6 instances when traffic increases.

As a Solutions Architect, which of the following configurations would you select as the best fit for these requirements?

A

The ASG should be configured with the minimum capacity set to 4, with 2 instances each in two different Availability Zones, and the maximum capacity set to 6.

  • You configure the size of your Auto Scaling group by setting the minimum, maximum, and desired capacity.
  • The minimum and maximum capacity are required to create an Auto Scaling group, while the desired capacity is optional.

If you do not define your desired capacity upfront, it defaults to your minimum capacity.

Amazon EC2 Auto Scaling

  • Enables you to take advantage of the safety and reliability of geographic redundancy by spanning Auto Scaling groups across multiple Availability Zones within a Region.
  • When one Availability Zone becomes unhealthy or unavailable, Auto Scaling launches new instances in an unaffected Availability Zone.
  • When the unhealthy Availability Zone returns to a healthy state, Auto Scaling automatically redistributes the application instances evenly across all of the designated Availability Zones.

Since the application is extremely critical and needs to have a reliable architecture to support it, the EC2 instances should be maintained in at least two Availability Zones (AZs) for uninterrupted service.

  • Amazon EC2 Auto Scaling attempts to distribute instances evenly between the Availability Zones that are enabled for your Auto Scaling group.
  • This is why the minimum capacity should be 4 instances and not 2.

ASG will launch 2 instances each in both AZs, and this redundancy is needed to keep the service available at all times (see the sketch below).
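A minimal boto3 sketch of such a group; the group name, launch template, and AZ names are assumptions:

    import boto3

    asg = boto3.client("autoscaling")

    # Two AZs, minimum 4 (2 per AZ), peak capacity 6.
    asg.create_auto_scaling_group(
        AutoScalingGroupName="ambulance-app-asg",
        MinSize=4,
        MaxSize=6,
        DesiredCapacity=4,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
        LaunchTemplate={"LaunchTemplateName": "ambulance-app",
                        "Version": "$Latest"},
    )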

20
Q

A media company is evaluating the possibility of moving its IT infrastructure to the AWS Cloud. The company needs at least 10 TB of storage with the maximum possible I/O performance for processing certain files which are mostly large videos. The company also needs close to 450 TB of very durable storage for storing media content and almost double of it, i.e. 900 TB for archival of legacy data.

As a Solutions Architect, which set of services will you recommend to meet these requirements?

A

Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage -

Instance store

  • An instance store provides temporary block-level storage for your instance.
  • This storage is located on disks that are physically attached to the host computer.
  • Is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
  • You can specify instance store volumes for an instance only when you launch it.
  • You can’t detach an instance store volume from one instance and attach it to a different instance.
  • Some instance types use NVMe or SATA-based solid-state drives (SSD) to deliver high random I/O performance.
  • This is a good option when you need storage with very low latency, but you don’t need the data to persist when the instance terminates or you can take advantage of fault-tolerant architectures.

S3 Standard

  • Offers high durability, availability, and performance object storage for frequently accessed data.
  • Because it delivers low latency and high throughput, S3 Standard is appropriate for a wide variety of use cases, including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics.

S3 Glacier

  • Is a secure, durable, and low-cost storage class for data archiving.
  • You can reliably store any amount of data at costs that are competitive with or cheaper than on-premises solutions.
  • To keep costs low yet suitable for varying needs, S3 Glacier provides three retrieval options that range from a few minutes to hours.
  • You can upload objects directly to S3 Glacier, or use S3 Lifecycle policies to transfer data between any of the S3 Storage Classes for active data (S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA) and S3 Glacier.
21
Q

Which of the following is true regarding cross-zone load balancing as seen in Application Load Balancer versus Network Load Balancer?

A

By default, cross-zone load balancing is enabled for Application Load Balancer and disabled for Network Load Balancer

When cross-zone load balancing is enabled, each load balancer node distributes traffic across the registered targets in all the enabled Availability Zones. When cross-zone load balancing is disabled, each load balancer node distributes traffic only across the registered targets in its own Availability Zone (see the sketch below).
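For a Network Load Balancer, the setting can be switched on at the load balancer level; a minimal boto3 sketch with a placeholder ARN:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Cross-zone load balancing is off by default for an NLB; enable it here.
    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                        "loadbalancer/net/my-nlb/1234567890abcdef",
        Attributes=[{"Key": "load_balancing.cross_zone.enabled",
                     "Value": "true"}],
    )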

22
Q

A silicon valley based startup helps its users legally sign highly confidential contracts. To meet the compliance guidelines, the startup must ensure that the signed contracts are encrypted using the AES-256 algorithm via an encryption key that is generated internally. The startup is now migrating to AWS Cloud and would like you to advise them on the encryption scheme to adopt. The startup wants to continue using their existing encryption key generation mechanism.

What do you recommend?

A

SSE-C - With Server-Side Encryption with Customer-Provided Keys (SSE-C),

  • You manage the encryption keys and Amazon S3 manages the encryption, as it writes to disks, and decryption when you access your objects.
  • With SSE-C, the startup can still provide the encryption key but let AWS do the encryption (see the sketch below).
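A minimal boto3 sketch of SSE-C uploads and downloads; the bucket, object key, and key file are placeholders, and boto3 base64-encodes and checksums the key for you:

    import boto3

    s3 = boto3.client("s3")

    # 32-byte AES-256 key produced by the startup's existing key generator
    # (placeholder file).
    customer_key = open("contract-key.bin", "rb").read()

    s3.put_object(
        Bucket="signed-contracts",
        Key="contracts/contract-42.pdf",
        Body=open("contract-42.pdf", "rb"),
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=customer_key,
    )

    # The same key must be supplied again on every read.
    obj = s3.get_object(
        Bucket="signed-contracts",
        Key="contracts/contract-42.pdf",
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=customer_key,
    )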
23
Q

The data science team at a mobility company wants to analyze real-time location data of rides. The company is using Kinesis Data Firehose for delivering the location-specific streaming data into targets for downstream analytics.

Which of the following targets are NOT supported by Kinesis Data Firehose?

A

Amazon EMR

You can use Amazon Kinesis Data Firehose to load streaming data into data lakes, data stores, and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk.

Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto.

Amazon EMR uses Hadoop, an open-source framework, to distribute your data and processing across a resizable cluster of Amazon EC2 instances.

Firehose does not support Amazon EMR as a target for delivering the streaming data.

24
Q

Computer vision researchers at a university are trying to optimize the I/O bound processes for a proprietary algorithm running on EC2 instances. The ideal storage would facilitate high-performance IOPS when doing file processing in a temporary storage space before uploading the results back into Amazon S3.

As a solutions architect, which of the following AWS storage options would you recommend as the MOST performant as well as cost-optimal?

A

Use EC2 instances with Instance Store as the storage type

  • An instance store provides temporary block-level storage for your instance.
  • This storage is located on disks that are physically attached to the host computer.
  • Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
  • Some instance types use NVMe or SATA-based solid-state drives (SSD) to deliver high random I/O performance.
  • This is a good option when you need storage with very low latency, but you don’t need the data to persist when the instance terminates or you can take advantage of fault-tolerant architectures.

As Instance Store delivers high random I/O performance, it can act as a temporary storage space, and these volumes are included as part of the instance’s usage cost, therefore this is the correct option.

25
Q

A streaming solutions company is building a video streaming product by using an Application Load Balancer (ALB) that routes the requests to the underlying EC2 instances. The engineering team has noticed a peculiar pattern. The ALB removes an instance whenever it is detected as unhealthy but the Auto Scaling group fails to kick-in and provision the replacement instance.

What could explain this anomaly?

A

The Auto Scaling group is using EC2 based health check and the Application Load Balancer is using ALB based health check

  • An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for automatic scaling and management.
  • Application Load Balancer automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, and Lambda functions.
  • It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones.
  • If the Auto Scaling group (ASG) is using EC2 as the health check type and the Application Load Balancer (ALB) is using its in-built health check, there may be a situation where the ALB health check fails because the health check pings fail to receive a response from the instance.
  • At the same time, the ASG health check can come back as successful because it is based on the EC2 status checks.
  • Therefore, in this scenario, the ALB will remove the instance from its inventory; however, the ASG will fail to provide the replacement instance.
26
Q

A company wants to publish an event into an SQS queue whenever a new object is uploaded on S3.

Which of the following statements are true regarding this functionality?

A

Only Standard SQS queue is allowed as an Amazon S3 event notification destination, whereas FIFO SQS queue is not allowed

  • The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket.
  • To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications.

Amazon S3 supports the following destinations where it can publish events:

  • Amazon Simple Notification Service (Amazon SNS) topic
  • Amazon Simple Queue Service (Amazon SQS) queue
  • AWS Lambda

Currently, only the Standard SQS queue is allowed as an Amazon S3 event notification destination; the FIFO SQS queue is not allowed.
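A minimal boto3 sketch of wiring a bucket to a standard queue; the bucket name and queue ARN are placeholders, and the queue policy must separately allow s3.amazonaws.com to send messages:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_notification_configuration(
        Bucket="order-uploads",
        NotificationConfiguration={
            "QueueConfigurations": [{
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:order-events",
                "Events": ["s3:ObjectCreated:*"],
            }]
        },
    )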

27
Q

A CRM application is facing user experience issues with users reporting frequent sign-in requests from the application. The application is currently hosted on multiple EC2 instances behind an Application Load Balancer. The engineering team has identified the root cause as unhealthy servers causing session data to be lost. The team would like to implement a distributed in-memory cache-based session management solution.

As a solutions architect, which of the following solutions would you recommend?

A

Use ElastiCache for distributed cache-based session management

  • Amazon ElastiCache can be used as a distributed in-memory cache for session management.
  • Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-Source compatible in-memory data stores in the cloud.
  • Session stores can be set up using either Memcached or Redis for ElastiCache.

Amazon ElastiCache for Redis is a great choice for real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session store.

  • Amazon ElastiCache for Memcached is a Memcached-compatible in-memory key-value store service that can be used as a cache or a data store.
  • Session stores are easy to create with Amazon ElastiCache for Memcached.
28
Q

A company hires experienced specialists to analyze the customer service calls attended by its call center representatives. Now, the company wants to move to AWS Cloud and is looking at an automated solution to analyze customer service calls for sentiment analysis and security.

As a Solutions Architect, which of the following solutions would you recommend?

A

Use Amazon Transcribe to convert audio files to text and Amazon Athena to understand the underlying customer sentiments -

  • Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy to convert audio to text.
  • One key feature of the service is called speaker identification, which you can use to label each individual speaker when transcribing multi-speaker audio files.
  • You can specify Amazon Transcribe to identify 2–10 speakers in the audio clip.
  • Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.
  • Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

To leverage Athena, you can simply point to your data in Amazon S3, define the schema, and start querying using standard SQL. Most results are delivered within seconds.

29
Q

An Internet-of-Things (IoT) company is looking for a database solution on AWS Cloud that has Auto Scaling capabilities and is highly available. The database should be able to handle any changes in data attributes over time, in case the company updates the data feed from its IoT devices. The database must provide the capability to output a continuous stream with details of any changes to the underlying data.

As a Solutions Architect, which database will you recommend?

A

Amazon DynamoDB

  • Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale.
  • It’s a fully managed, multi-Region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications.
  • DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second.
  • DynamoDB is serverless with no servers to provision, patch, or manage and no software to install, maintain, or operate.

A DynamoDB stream is an ordered flow of information about changes to items in a DynamoDB table.

  • When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.
  • Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attributes of the items that were modified.
  • A stream record contains information about a data modification to a single item in a DynamoDB table.
  • You can configure the stream so that the stream records capture additional information, such as the “before” and “after” images of modified items.

DynamoDB is horizontally scalable, has a DynamoDB streams capability and is multi-AZ by default.

  • On top of that, we can adjust the RCU and WCU automatically using Auto Scaling. This is the right choice for the current requirements (see the sketch below).
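A minimal boto3 sketch of enabling a stream on an existing table; the table name is a placeholder:

    import boto3

    ddb = boto3.client("dynamodb")

    # Emit both the "before" and "after" images of every modified item.
    ddb.update_table(
        TableName="device-readings",
        StreamSpecification={
            "StreamEnabled": True,
            "StreamViewType": "NEW_AND_OLD_IMAGES",
        },
    )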
30
Q

An IT company has built a custom data warehousing solution for a retail organization by using Amazon Redshift. As part of the cost optimizations, the company wants to move any historical data (any data older than a year) into S3, as the daily analytical reports consume data for just the last year. However, the analysts want to retain the ability to cross-reference this historical data along with the daily reports.

The company wants to develop a solution with the LEAST amount of effort and MINIMUM cost. As a solutions architect, which option would you recommend to facilitate this use-case?

A

Use Redshift Spectrum to create Redshift cluster tables pointing to the underlying historical data in S3. The analytics team can then query this historical data to cross-reference with the daily reports from Redshift

Amazon Redshift

  • Is a fully-managed petabyte-scale cloud-based data warehouse product designed for large scale data set storage and analysis.

Using Amazon Redshift Spectrum,

  • You can efficiently query and retrieve structured and semistructured data from files in Amazon S3 without having to load the data into Amazon Redshift tables.
  • Amazon Redshift Spectrum resides on dedicated Amazon Redshift servers that are independent of your cluster.
  • Redshift Spectrum pushes many compute-intensive tasks, such as predicate filtering and aggregation, down to the Redshift Spectrum layer.

Thus, Redshift Spectrum queries use much less of your cluster’s processing capacity than other queries.
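A rough sketch of the setup through the Redshift Data API; the cluster, role, bucket, and column names are all assumptions:

    import boto3

    rsd = boto3.client("redshift-data")

    schema_ddl = """
    CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum
    FROM DATA CATALOG DATABASE 'historical'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-spectrum-role'
    CREATE EXTERNAL DATABASE IF NOT EXISTS
    """

    table_ddl = """
    CREATE EXTERNAL TABLE spectrum.sales_history (
        order_id   BIGINT,
        order_date DATE,
        amount     DECIMAL(10,2)
    )
    STORED AS PARQUET
    LOCATION 's3://warehouse-archive/sales/'
    """

    rsd.batch_execute_statement(
        ClusterIdentifier="retail-dw",
        Database="analytics",
        DbUser="admin",
        Sqls=[schema_ddl, table_ddl],
    )

Analysts can then join spectrum.sales_history against local Redshift tables in ordinary SQL to cross-reference the historical data with the daily reports.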

31
Q

A DevOps engineer at an IT company was recently added to the admin group of the company’s AWS account. The AdministratorAccess managed policy is attached to this group.

Can you identify the AWS tasks that the DevOps engineer CANNOT perform even though he has full Administrator privileges (Select two)?

A

Configure an Amazon S3 bucket to enable MFA (Multi Factor Authentication) delete

Close the company’s AWS account

An IAM user with full administrator access can perform almost all AWS tasks except a few tasks designated only for the root account user.

Some of the AWS tasks that only a root account user can do are as follows:

  • change the account name
  • change the root password
  • change the root email address
  • change the AWS Support plan
  • close the AWS account
  • enable MFA delete on an S3 bucket
  • create or delete CloudFront key pairs
  • register for AWS GovCloud

Even though the DevOps engineer is part of the admin group, he cannot configure an Amazon S3 bucket to enable MFA delete or close the company’s AWS account.

32
Q

A retail company wants to establish encrypted network connectivity between its on-premises data center and AWS Cloud. The company wants to get the solution up and running in the fastest possible time and it should also support encryption in transit.

As a solutions architect, which of the following solutions would you suggest to the company?

A

Use Site-to-Site VPN to establish encrypted network connectivity between the on-premises data center and AWS Cloud

  • AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC).
  • You can securely extend your data center or branch office network to the cloud with an AWS Site-to-Site VPN connection.
  • A VPC VPN connection utilizes IPsec to establish encrypted network connectivity between your on-premises network and Amazon VPC over the Internet.
  • IPsec is a protocol suite for securing IP communications by authenticating and encrypting each IP packet in a data stream.
33
Q

A financial services firm uses a high-frequency trading system and wants to write the log files into Amazon S3. The system will also read these log files in parallel on a near real-time basis. The engineering team wants to address any data discrepancies that might arise when the trading system overwrites an existing log file and then tries to read that specific log file.

Which of the following options BEST describes the capabilities of Amazon S3 relevant to this scenario?

A

A process replaces an existing object and immediately tries to read it. Amazon S3 always returns the latest version of the object

  • Amazon S3 delivers strong read-after-write consistency automatically, without changes to performance or availability, without sacrificing regional isolation for applications, and at no additional cost.
  • After a successful write of a new object or an overwrite of an existing object, any subsequent read request immediately receives the latest version of the object.
  • S3 also provides strong consistency for list operations, so after a write, you can immediately perform a listing of the objects in a bucket with any changes reflected.
  • Strong read-after-write consistency helps when you need to immediately read an object after a write.
  • For example, it helps when you often read and list objects immediately after writing them.

To summarize, all S3 GET, PUT, and LIST operations, as well as operations that change object tags, ACLs, or metadata, are strongly consistent.

  • What you write is what you will read, and the results of a LIST will be an accurate reflection of what’s in the bucket.
34
Q

An automobile company is running its flagship application on a fleet of EC2 instances behind an Auto Scaling Group (ASG). The ASG has been configured more than a year ago. A young developer has just joined the development team and wants to understand the best practices to manage and configure an ASG.

As a Solutions Architect, which of these would you identify as the key characteristics that the developer needs to understand regarding ASG configurations? (Select three)

A

Amazon EC2 Auto Scaling is a fully managed service designed to launch or terminate Amazon EC2 instances automatically to help ensure you have the correct number of Amazon EC2 instances available to handle the load for your application.

  • If you have an EC2 Auto Scaling group (ASG) with running instances and you choose to delete the ASG, the instances will be terminated and the ASG will be deleted. This statement is correct.
  • EC2 Auto Scaling groups can span Availability Zones, but not AWS Regions - they are regional constructs.
  • Data is not automatically copied from existing instances to a new dynamically created instance - you can use lifecycle hooks to copy the data.
35
Q

A company’s cloud architect has set up a solution that uses Route 53 to configure the DNS records for the primary website with the domain pointing to the Application Load Balancer (ALB). The company wants a solution where users will be directed to a static error page, configured as a backup, in case of unavailability of the primary website.

Which configuration will meet the company’s requirements, while keeping the changes to a bare minimum?

A

Set up a Route 53 active-passive failover configuration. If Route 53 health check determines the ALB endpoint as unhealthy, the traffic will be diverted to a static error page, hosted on Amazon S3 bucket

  • Use an active-passive failover configuration when you want a primary resource or group of resources to be available the majority of the time and you want a secondary resource or group of resources to be on standby in case all the primary resources become unavailable.
  • When responding to queries, Route 53 includes only healthy primary resources.
  • If all the primary resources are unhealthy, Route 53 begins to include only the healthy secondary resources in response to DNS queries (see the sketch below).
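A rough boto3 sketch of the failover record pair; the hosted zone ids, record names, and the S3 website zone id are placeholders for the real values:

    import boto3

    r53 = boto3.client("route53")

    r53.change_resource_record_sets(
        HostedZoneId="Z0EXAMPLE",
        ChangeBatch={"Changes": [
            # PRIMARY: alias to the ALB, health-evaluated.
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A",
                "SetIdentifier": "primary", "Failover": "PRIMARY",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",   # example ALB zone id
                    "DNSName": "my-alb-123.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            }},
            # SECONDARY: alias to the S3 static-website endpoint.
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A",
                "SetIdentifier": "secondary", "Failover": "SECONDARY",
                "AliasTarget": {
                    "HostedZoneId": "Z3AQBSTGFYJSTF",   # example S3 website zone id
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            }},
        ]},
    )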
36
Q

Your firm has implemented a multi-tiered networking structure within the VPC - with two public and two private subnets. The public subnets are used to deploy the Application Load Balancers, while the two private subnets are used to deploy the application on Amazon EC2 instances. The development team wants the EC2 instances to have access to the internet. The solution has to be fully managed by AWS and needs to work over IPv4.

What will you recommend?

A

NAT Gateways deployed in your public subnet - You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances.

A NAT gateway has the following characteristics and limitations:

  • A NAT gateway supports 5 Gbps of bandwidth and automatically scales up to 45 Gbps.
  • You can associate exactly one Elastic IP address with a NAT gateway.
  • A NAT gateway supports the following protocols: TCP, UDP, and ICMP.
  • You cannot associate a security group with a NAT gateway.
  • You can use a network ACL to control the traffic to and from the subnet in which the NAT gateway is located.
  • A NAT gateway can support up to 55,000 simultaneous connections to each unique destination.
  • You are charged for creating and using a NAT gateway in your account; NAT gateway hourly usage and data processing rates apply.

Therefore, you must use a NAT Gateway in your public subnet in order to provide internet access to your instances in your private subnets (see the sketch below).
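A minimal boto3 sketch of that setup; the subnet, route table, and other ids are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # The NAT gateway lives in a public subnet and needs an Elastic IP.
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(SubnetId="subnet-0123456789abcdef0",
                                 AllocationId=eip["AllocationId"])
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # Default route for the private subnets' route table.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )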
37
Q

An application with global users across AWS Regions had suffered an issue when the Elastic Load Balancer (ELB) in a Region malfunctioned thereby taking down the traffic with it. The manual intervention cost the company significant time and resulted in major revenue loss.

What should a solutions architect recommend to reduce internet latency and add automatic failover across AWS Regions?

A

Set up AWS Global Accelerator and add endpoints to cater to users in different geographic locations

  • As your application architecture grows, so does the complexity, with longer user-facing IP lists and more nuanced traffic routing logic.
  • AWS Global Accelerator solves this by providing you with two static IPs that are anycast from our globally distributed edge locations, giving you a single entry point to your application, regardless of how many AWS Regions it’s deployed in.
  • This allows you to add or remove origins, Availability Zones or Regions without reducing your application availability.
  • Traffic routing can be managed manually, or in the console with endpoint traffic dials and weights.
  • If your application endpoint has a failure or availability issue, AWS Global Accelerator will automatically redirect your new connections to a healthy endpoint within seconds.

By using AWS Global Accelerator, you can:

  • Associate the static IP addresses provided by AWS Global Accelerator to regional AWS resources or endpoints, such as Network Load Balancers, Application Load Balancers, EC2 Instances, and Elastic IP addresses.
  • The IP addresses are anycast from AWS edge locations so they provide onboarding to the AWS global network close to your users.
  • Easily move endpoints between Availability Zones or AWS Regions without needing to update your DNS configuration or change client-facing applications.
  • Dial traffic up or down for a specific AWS Region by configuring a traffic dial percentage for your endpoint groups. This is especially useful for testing performance and releasing updates.
  • Control the proportion of traffic directed to each endpoint within an endpoint group by assigning weights across the endpoints (see the sketch below).
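A rough boto3 sketch of the accelerator setup; the names and the ALB ARN are placeholders (the Global Accelerator API is served from us-west-2):

    import boto3

    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    acc = ga.create_accelerator(Name="global-app", IpAddressType="IPV4")
    listener = ga.create_listener(
        AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
        Protocol="TCP",
        PortRanges=[{"FromPort": 443, "ToPort": 443}],
    )

    # One endpoint group per Region; failover between them is automatic.
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion="us-east-1",
        EndpointConfigurations=[{
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:"
                          "123456789012:loadbalancer/app/my-alb/1234567890abcdef",
        }],
    )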