Domain 2: Design Resilient Architectures Flashcards

1
Q

You are working for a large financial institution and preparing for disaster recovery and upcoming DR drills. A key component in the DR plan will be the database instances and their data. An aggressive Recovery Time Objective (RTO) dictates that the database needs to be synchronously replicated. Which configuration can meet this requirement?

AWS Lambda triggers a CloudFormation template launch in another Region.

RDS read replicas

RDS Multi-Region

RDS Multi-AZ

A

RDS Multi-Region

RDS Multi-Region does not exist. RDS Multi-AZ provides failover capability and synchronous replication.

Selected
RDS Multi-AZ

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.

https://aws.amazon.com/rds/features/multi-az/
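
To make this concrete, here is a minimal boto3 sketch of provisioning a Multi-AZ MySQL instance; the identifier, credentials, and sizing are placeholders, not values from the question.

import boto3

rds = boto3.client("rds")

# MultiAZ=True tells RDS to create a synchronously replicated standby
# in a different Availability Zone and fail over to it automatically.
rds.create_db_instance(
    DBInstanceIdentifier="dr-drill-db",   # hypothetical identifier
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",      # prefer Secrets Manager in practice
    MultiAZ=True,
)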

2
Q

Currently, you are employed as a solutions architect for a large international shipping company. The company is undergoing an IT transformation and they want to create an immutable database, where they can track packages as they are sent around the world. They will need to track what boxes they go in, what trucks they are sent in, and what aircraft or sea containers they are shipped in. The database needs to be immutable and cryptographically verifiable, and they would like to leverage the AWS cloud to achieve this. What database technology would best suit this requirement?

Aurora

RDS

Neptune

Amazon Quantum Ledger Database (QLDB)

A

Neptune

This is a graph database and would not be immutable and cryptographically verifiable.

Selected
Amazon Quantum Ledger Database (QLDB)

This is an immutable and cryptographically verifiable database and would be the best solution.
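
As a rough illustration (the ledger name is hypothetical), creating a QLDB ledger with boto3 is a single call; the append-only journal and hash chaining that make it immutable and cryptographically verifiable are built in.

import boto3

qldb = boto3.client("qldb")

# The ledger's journal is append-only and cryptographically verifiable.
qldb.create_ledger(
    Name="package-tracking",     # hypothetical ledger name
    PermissionsMode="STANDARD",
    DeletionProtection=True,
)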

3
Q

Your company has performed a Disaster Recovery drill which failed to meet the Recovery Time Objective (RTO) desired by executive management. The failure was due in large part to the amount of time taken to restore proper functioning on the database side. You have given management a recommendation of implementing synchronous data replication for the RDS database to help meet the RTO. Which of these options can perform synchronous data replication in RDS?

Read replicas

AWS Database Migration Service

RDS Multi-AZ

DAX

A

Read replicas

For the MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server database engines, Amazon RDS creates a second DB instance using a snapshot of the source DB instance. It then uses the engines’ native asynchronous replication to update the read replica whenever there is a change to the source DB instance. The asynchronous replication will not be sufficient to meet an aggressive RTO. We are not given the exact RTO in the question, but it has been established that we need synchronous replication. https://aws.amazon.com/rds/features/read-replicas/ Read replicas do support Multi-AZ: https://aws.amazon.com/about-aws/whats-new/2018/01/amazon-rds-read-replicas-now-support-multi-az-deployments/ However, they still do not support synchronous replication.

Selected

RDS Multi-AZ

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention. https://aws.amazon.com/rds/features/multi-az/
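
If the database already exists as a Single-AZ instance, the recommendation amounts to one change, sketched here with a placeholder identifier.

import boto3

rds = boto3.client("rds")

# Converting an existing instance to Multi-AZ adds a synchronously
# replicated standby; ApplyImmediately avoids waiting for the next
# maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-db",   # hypothetical identifier
    MultiAZ=True,
    ApplyImmediately=True,
)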

4
Q

The AWS team in a large company is spending a lot of time monitoring EC2 instances and performing maintenance when the instances report health check failures. How can you most efficiently automate this monitoring and repair?

Create a Lambda function which can be triggered by a failed instance health check. Have the Lambda function deploy a CloudFormation template which can perform the creation of a new instance.

Create a cron job which monitors the instances periodically and starts a new instance if a health check has failed.

Create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically reboots the instance if a health check fails.

Create a Lambda function which can be triggered by a failed instance health check. Have the Lambda function destroy the instance and spin up a new instance.

A

Create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically reboots the instance if a health check fails.

You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically reboots the instance. The reboot alarm action is recommended for Instance Health Check failures (as opposed to the recover alarm action, which is suited for System Health Check failures). An instance reboot is equivalent to an operating system reboot. In most cases, it takes only a few minutes to reboot your instance. When you reboot an instance, it remains on the same physical host, so your instance keeps its public DNS name, private IP address, and any data on its instance store volumes. Rebooting an instance doesn’t start a new instance billing hour, unlike stopping and restarting your instance. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html

Create a Lambda function which can be triggered by a failed instance health check. Have the Lambda function destroy the instance and spin up a new instance.

Creating a Lambda function would be recreating functionality already present in CloudWatch.
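
A hedged sketch of the recommended alarm follows; the instance ID and the Region in the reboot action ARN are placeholders. The special arn:aws:automate:<region>:ec2:reboot action ARN is what triggers the reboot.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm on the instance-level status check and reboot automatically
# after three consecutive failed one-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="reboot-on-instance-check-failure",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_Instance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:reboot"],
)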

5
Q

An application team has decided to leverage AWS for their application infrastructure. The application performs proprietary, internal processes that other business applications utilize for their daily workloads. It is built with Apache Kafka to handle real-time streaming; virtual machines running the application in Docker containers consume the data from the stream. The team wants to leverage services that provide less overhead but also cause the least amount of disruption to coding and deployments. Which combination of AWS services would best meet the requirements? (CHOOSE 2)

Amazon SNS

Amazon MQ

AWS Lambda

Amazon ECS Fargate

Amazon MSK

Amazon Kinesis Data Streams

A

Amazon MQ

This service is meant to be used with RabbitMQ or ActiveMQ message broker systems.

Selected
AWS Lambda

Amazon ECS Fargate

Fargate containers offer the least disruptive changes, while also minimizing the operational overhead of managing the compute services. Reference: What is AWS Fargate?

Selected
Amazon MSK

This service is meant for applications that currently use or are going to use Apache Kafka for messaging. It offloads management of the Kafka control plane operations to AWS. Reference: Welcome to the Amazon MSK Developer Guide
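
Because MSK exposes standard Kafka endpoints, existing producers and consumers keep working once they are pointed at the managed cluster; a small sketch of fetching the new connection string (the cluster ARN is a placeholder):

import boto3

kafka = boto3.client("kafka")

# Existing Kafka clients only need the new bootstrap brokers string;
# the application code itself stays unchanged.
response = kafka.get_bootstrap_brokers(
    ClusterArn="arn:aws:kafka:us-east-1:111122223333:cluster/app/example"  # placeholder
)
print(response["BootstrapBrokerStringTls"])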

6
Q

You work for an online retailer where any downtime at all can cause a significant loss of revenue. You have architected your application to be deployed on an Auto Scaling Group of EC2 instances behind a load balancer. You have configured and deployed these resources using a CloudFormation template. The Auto Scaling Group is configured with default settings and a simple CPU utilization scaling policy. You have also set up multiple Availability Zones for high availability. The load balancer does health checks against an HTML file generated by a script. When you begin performing load testing on your application, you notice in CloudWatch that the load balancer is not sending traffic to one of your EC2 instances. What could be the problem?

The EC2 instance has failed the load balancer health check.

You are load testing at a moderate traffic level and not all instances are needed.

The instance has not been registered with CloudWatch.

The EC2 instance has failed EC2 status checks.

A

The EC2 instance has failed the load balancer health check.

The load balancer will route the incoming requests only to the healthy instances. The EC2 instance may have passed status checks and be considered healthy to the Auto Scaling group, but the ELB may not use it if the ELB health check has not been met. The ELB health check has a default of 30 seconds between checks, and a default of 3 checks before making a decision. Therefore, the instance could be visually available but unused for at least 90 seconds before the GUI would show it as failed. In CloudWatch, where the issue was noticed, it would appear to be a healthy EC2 instance but with no traffic, which is what was observed. https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-elb-healthcheck.html

You are load testing at a moderate traffic level and not all instances are needed.

The instance has not been registered with CloudWatch.

The EC2 instance has failed EC2 status checks.

What is one of the clues we got from this question? The Auto Scaling group is configured with default settings. The default health checks for an Auto Scaling group are EC2 status checks only. If an instance fails these status checks, the Auto Scaling group considers the instance unhealthy and replaces it.
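
One common remedy, sketched below with a placeholder group name, is to switch the Auto Scaling group's health check type from the default EC2 to ELB so that instances failing the load balancer health check are also replaced.

import boto3

autoscaling = boto3.client("autoscaling")

# With HealthCheckType="ELB", the group replaces instances that fail
# the load balancer health check, not just EC2 status checks.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",   # hypothetical group name
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)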

7
Q

You have been hired as a Solutions Architect for a company that pairs photos with related story narratives in PDF format. The company needs to be able to store files in several different formats, such as PDF, JPG, PNG, Word, and several others. This storage needs to be highly durable. Which storage type will best meet this requirement?

DynamoDB

Amazon RDS

EC2 instance store

S3

A

S3

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9s) of durability, and stores data for millions of applications for companies all around the world.

https://aws.amazon.com/s3/
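
Storing any of the listed formats is the same single call, since S3 treats every file as an opaque object; the bucket and key names below are placeholders.

import boto3

s3 = boto3.client("s3")

# The file format (PDF, JPG, PNG, Word) is irrelevant to S3;
# every object gets the same 11-nines durability design.
s3.upload_file(
    Filename="story-narrative.pdf",
    Bucket="photo-story-archive",             # hypothetical bucket
    Key="narratives/story-narrative.pdf",
)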

8
Q

Jennifer is a cloud engineer for her application team. The application leverages several third-party SaaS vendors to complete their workflows within the application. Currently, the team uses numerous AWS Lambda functions for each SaaS vendor that run daily to connect to the configured vendor. The functions initiate a transfer of data files, ranging from one megabyte up to 80 gibibytes in size. These data files are stored in an Amazon S3 bucket and then referenced by the application itself. The data transfer routinely fails due to execution timeout limits in the Lambda functions, and the team wants to find a simpler and less error-prone way of transferring the required data. Which solution or AWS service could be the best fit for their solution?

Amazon AppFlow

Amazon EKS with Auto Scaling

Increase the Lambda function timeouts to one hour

Amazon EC2 Auto Scaling Groups

A

Amazon AppFlow

AppFlow offers a fully managed service for easily automating the exchange of data between SaaS vendors and AWS services like Amazon S3. You can transfer up to 100 gibibytes per flow, and this avoids the Lambda function timeouts. Reference: What is Amazon AppFlow? Tutorial: Transfer data between applications with Amazon AppFlow
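
Once a flow has been configured (in the console or via create_flow), replacing the Lambda-based transfer comes down to triggering it; the flow name below is hypothetical.

import boto3

appflow = boto3.client("appflow")

# Starts an on-demand run of a preconfigured flow that moves data from
# the SaaS vendor into the S3 bucket, with no Lambda timeout in play.
appflow.start_flow(flowName="saas-vendor-to-s3")  # hypothetical flow name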

9
Q

A new startup is considering the advantages of using Amazon DynamoDB versus a traditional relational database in AWS RDS. The NoSQL nature of DynamoDB presents a small learning curve to the team members who all have experience with traditional databases. The company will have multiple databases, and the decision will be made on a case-by-case basis. Which of the following use cases would favor Amazon DynamoDB? CHOOSE 3

High-performance reads and writes for online transaction processing (OLTP) workloads

Managing web session data

Storing metadata for S3 objects

Strong referential integrity between tables

Online analytical processing (OLAP)/data warehouse implementations

Storing binary large object (BLOB) data

A

High-performance reads and writes for online transaction processing (OLTP) workloads

High-performance reads and writes are easy to manage with Amazon DynamoDB, and you can expect performance that is effectively constant across widely varying loads. Reference: How to determine if Amazon DynamoDB is appropriate for your needs, and then plan your migration.

Selected
Managing web session data

Amazon DynamoDB is a NoSQL database that supports key-value and document data structures. A key-value store is a database service that provides support for storing, querying, and updating collections of objects that are identified using a key and values that contain the actual content being stored. Meanwhile, a document data store provides support for storing, querying, and updating items in a document format such as JSON, XML, and HTML. Amazon DynamoDB’s fast and predictable performance characteristics make it a great match for handling session data. Plus, since it’s a fully-managed NoSQL database service, you avoid all the work of maintaining and operating a separate session store. Reference: Amazon DynamoDB Session Manager for Apache Tomcat.

Selected
Storing metadata for S3 objects

Storing metadata for Amazon S3 objects is correct because Amazon DynamoDB stores structured data indexed by primary key and allows low-latency read and write access to items ranging from 1 byte up to 400 KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. In order to optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.
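
A minimal sketch of that S3-metadata pattern, with hypothetical table and attribute names: the large object lives in S3 while DynamoDB holds a small, queryable pointer item.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("s3-object-metadata")   # hypothetical table

# Small metadata item pointing at a potentially very large S3 object.
table.put_item(
    Item={
        "object_key": "photos/2024/beach.jpg",   # assumed partition key
        "bucket": "photo-story-archive",
        "size_bytes": 4518133,
        "content_type": "image/jpeg",
    }
)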

10
Q

You have taken over management of several instances in the company AWS environment. You want to quickly review the scripts used to bootstrap the instances at launch. This data can be retrieved with a simple HTTP request from the instance itself. What can you append to the URL http://169.254.169.254/latest/ to retrieve this data?

meta-data/

instance-demographic-data/

user-data/

instance-data/

A

user-data/

When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-add-user-data.html https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-instance-metadata.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts
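
A short sketch of retrieving the bootstrap script from the instance itself; this uses IMDSv2, which requires fetching a session token first (newer instances often enforce it).

import urllib.request

BASE = "http://169.254.169.254/latest"

# IMDSv2: obtain a short-lived session token, then request user data.
token_request = urllib.request.Request(
    f"{BASE}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
)
token = urllib.request.urlopen(token_request).read().decode()

user_data_request = urllib.request.Request(
    f"{BASE}/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(user_data_request).read().decode())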

11
Q

Due to strict compliance requirements, your company cannot leverage AWS cloud for hosting their Kubernetes clusters, nor for managing the clusters. However, they do want to try to follow the established best practices and processes that the Amazon EKS service has implemented. How can your company achieve this while running entirely on-premises?

Run Amazon EKS.

This cannot be done.

Run Amazon ECS Anywhere.

Run the clusters on-premises using Amazon EKS Distro.

A

Run the clusters on-premises using Amazon EKS Distro.

Amazon EKS Distro is the same Kubernetes distribution that Amazon EKS deploys in AWS, which allows you to leverage those best practices and established processes on-premises. Reference: Amazon EKS Distro

12
Q

An online retailer currently runs their application within AWS. Currently, everything is running on Amazon EC2 instances, including all application software. The application is well-written and completes order processes following a specific workflow logic order. The online retailer has begun to explore shifting their entire code base to AWS Lambda for each compute-based portion of the workflow, but they are not sure how best to interconnect the functions. There are three major requirements that need to be met. The first is that they need to implement a 20-minute wait period between certain functions in the application code and process. The second is they want to be able to conditionally handle a few different known scenarios that may occur during the order processing. The last requirement is to have an auditable workflow history. Which AWS service is the best fit for their workflow orchestration needs that has the least operational overhead and is the most cost-efficient?

Amazon EKS with Amazon RDS

AWS Lambda with Amazon S3

AWS Lambda with Amazon SNS

AWS Step Functions with AWS Lambda

A

AWS Step Functions with AWS Lambda

Use AWS Step Functions to orchestrate the Lambda functions for the computation. This service allows you to implement long-running workflows with wait periods (Wait states) and conditional branching (Choice states), and every execution's state transitions are recorded, which satisfies the auditable-history requirement. Reference: What is AWS Step Functions? Wait Choice
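
A hedged sketch of such a state machine follows: the 20-minute pause is a Wait state of 1200 seconds and the known scenarios map to a Choice state. All names, ARNs, and the status field are hypothetical.

import json
import boto3

sfn = boto3.client("stepfunctions")

# Minimal Amazon States Language definition: Task -> 20-minute Wait ->
# Choice that branches on a known scenario.
definition = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:ProcessOrder",
            "Next": "Wait20Minutes",
        },
        "Wait20Minutes": {"Type": "Wait", "Seconds": 1200, "Next": "CheckResult"},
        "CheckResult": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.status", "StringEquals": "BACKORDERED",
                 "Next": "HandleBackorder"}
            ],
            "Default": "Complete",
        },
        "HandleBackorder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:HandleBackorder",
            "End": True,
        },
        "Complete": {"Type": "Succeed"},
    },
}

sfn.create_state_machine(
    name="order-workflow",                                        # hypothetical
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsRole",   # hypothetical
)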

13
Q

A financial institution has an application that produces huge amounts of actuary data, which is ultimately expected to be in the terabyte range. There is a need to run complex analytic queries against terabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Which service will best meet this requirement?

RDS

Redshift

ElastiCache

DynamoDB

A

Redshift

Amazon Redshift is a fast, fully-managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It enables you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale out to petabytes of data for $1,000 per terabyte per year, less than a tenth of the cost of traditional on-premises solutions. Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data retrieved, so queries against Amazon S3 run fast, regardless of data set size.
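
As a usage sketch (the cluster, database, and user names are placeholders), analytic SQL can be submitted through the Redshift Data API without managing connections.

import boto3

redshift_data = boto3.client("redshift-data")

# Submit an analytic query; Redshift executes it using columnar
# storage and massively parallel processing across the cluster.
redshift_data.execute_statement(
    ClusterIdentifier="actuary-cluster",   # placeholder
    Database="actuary",
    DbUser="analyst",
    Sql="SELECT policy_year, SUM(claim_amount) FROM claims GROUP BY policy_year;",
)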

14
Q

You have a typical architecture for an Application Load Balancer fronting an Auto Scaling group of EC2 instances, backed by an RDS MySQL database. Your Application Load Balancer is performing health checks on the EC2 instances. What actions will be taken if an instance fails these health checks?

The ALB notifies the Auto Scaling group that the instance is down.

The instance is terminated by the ALB.

The ALB stops sending traffic to the instance.

The instance is replaced by the ALB.

A

The ALB stops sending traffic to the instance.

The load balancer routes requests only to the healthy instances. When the load balancer determines that an instance is unhealthy, it stops routing requests to that instance. The load balancer resumes routing requests to the instance when it has been restored to a healthy state. https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html
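
To observe this behavior, here is a small sketch (the target group ARN is a placeholder) that lists per-target health; unhealthy targets stay registered but simply receive no traffic.

import boto3

elbv2 = boto3.client("elbv2")

# Each target reports a state such as healthy or unhealthy; the ALB
# skips unhealthy ones when routing but never terminates or replaces them.
response = elbv2.describe_target_health(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                   "targetgroup/web/0123456789abcdef"   # placeholder
)
for target in response["TargetHealthDescriptions"]:
    print(target["Target"]["Id"], target["TargetHealth"]["State"])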

15
Q

You are designing a new application for a social media gaming company, and they want people to be able to use their Facebook accounts to sign up and log in to the new app. They've asked that you use Cognito to achieve this. However, management wants to know the steps involved for user authentication. Which of the steps below is the correct order to authenticate with Cognito?

Step 1 - Authenticate and get tokens. Step 2 - Exchange tokens and get AWS credentials. Step 3 - Access AWS services using credentials.

Step 1 - Exchange tokens and get AWS credentials. Step 2 - Access AWS services using credentials. Step 3 - Authenticate and get tokens.

Step 1 - Exchange tokens and get AWS credentials. Step 2 - Authenticate and get tokens. Step 3 - Access AWS services using credentials.

Step 1 - Access AWS services using credentials. Step 2 - Exchange tokens and get AWS credentials. Step 3 - Authenticate and get tokens.

A

Step 1 - Authenticate and get tokens. Step 2 - Exchange tokens and get AWS credentials. Step 3 - Access AWS services using credentials.

This is how the authentication process works.
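
A rough boto3 sketch of steps 2 and 3 (step 1, obtaining the token, happens against Facebook rather than AWS); the identity pool ID and token value are placeholders.

import boto3

cognito = boto3.client("cognito-identity")

facebook_token = "..."   # Step 1: token returned by the Facebook login (placeholder)

# Step 2: exchange the Facebook token for temporary AWS credentials.
identity = cognito.get_id(
    IdentityPoolId="us-east-1:example-identity-pool",   # placeholder
    Logins={"graph.facebook.com": facebook_token},
)
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins={"graph.facebook.com": facebook_token},
)["Credentials"]

# Step 3: access AWS services using those credentials.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)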

16
Q

Your company has decided to migrate a SQL Server database to a newly-created AWS account. Which service can be used to migrate the database?

Database Migration Service

DynamoDB

ElastiCache

AWS RDS

A

Database Migration Service

AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases.

AWS Database Migration Service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle or Microsoft SQL Server to Amazon Aurora. With AWS Database Migration Service, you can continuously replicate your data with high availability and consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3. Learn more about the supported source and target databases.

https://aws.amazon.com/dms/
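
Once the replication instance, source and target endpoints, and task have been defined, kicking off the migration is a single call; the task ARN below is a placeholder.

import boto3

dms = boto3.client("dms")

# Starts a preconfigured DMS task; the source SQL Server remains
# fully operational while its data is copied to the target.
dms.start_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:111122223333:task:EXAMPLE",  # placeholder
    StartReplicationTaskType="start-replication",
)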

17
Q

A financial tech company has decided to begin migrating their applications to the AWS cloud. Currently, they host their entire application using several self-managed Kubernetes clusters. One of their major concerns during this migration is monitoring and collecting system metrics due to the very large-scale deployments that are in place. Your Chief Technology Officer wants to avoid using AWS-proprietary technology-based monitoring services and instead leverage existing, well-known, open-source applications to help meet the monitoring requirements. Which combination of the following AWS services would best fit the company requirements while minimizing operational overhead? CHOOSE 2

AWS Config

Amazon Managed Service for Prometheus

Prometheus on Auto Scaling EC2 Instances

Amazon Managed Grafana

A

Amazon Managed Service for Prometheus

Prometheus offers open-source monitoring. Amazon Managed Service for Prometheus is a serverless, Prometheus-compatible monitoring service for container metrics. It is perfect for monitoring Kubernetes clusters at scale. Reference: What is Amazon Managed Prometheus

Selected

Amazon Managed Grafana

Grafana is a well-known open-source analytics and monitoring application. Amazon Managed Grafana offers a fully managed service for infrastructure for data visualizations. You can leverage this service to query, correlate, and visualize operational metrics from multiple sources. Reference: What is Amazon Managed Grafana
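
A minimal sketch of provisioning both managed services; the workspace alias, name, and Grafana settings are illustrative choices, not requirements from the question.

import boto3

# Amazon Managed Service for Prometheus workspace for the cluster metrics.
amp = boto3.client("amp")
amp.create_workspace(alias="eks-metrics")   # hypothetical alias

# Amazon Managed Grafana workspace to visualize those metrics.
grafana = boto3.client("grafana")
grafana.create_workspace(
    accountAccessType="CURRENT_ACCOUNT",
    authenticationProviders=["AWS_SSO"],
    permissionType="SERVICE_MANAGED",
    workspaceName="cluster-monitoring",     # hypothetical name
)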