Domain 2 - Design High-Performing Architectures Flashcards

1
Q

How long can a Lambda function run?

A

It can run for a maximum of 15 minutes per invocation.

2
Q

What service is used to trigger a scaling event when using Auto Scaling?

A

CloudWatch. You can use a CloudWatch alarm to say “when CPU utilization (or one of many other definable metrics) reaches 80%, scale.”
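As an illustration, the request behind such a policy can be sketched as plain data. This is a minimal sketch of what you would pass to the Auto Scaling `put_scaling_policy` API via boto3; the group and policy names are hypothetical, and no API call is made:

```python
# Sketch of a target-tracking scaling policy request: keep average CPU
# at 80% for an Auto Scaling group. Names are hypothetical examples.
scaling_policy = {
    "AutoScalingGroupName": "web-asg",        # hypothetical group name
    "PolicyName": "cpu-80-target-tracking",   # hypothetical policy name
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            # CloudWatch's average CPU metric across the group
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 80.0,  # add/remove instances to hold ~80% CPU
    },
}

# With credentials configured, you would apply it with boto3:
# boto3.client("autoscaling").put_scaling_policy(**scaling_policy)
```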

3
Q

A company has developed an app that processes photos and videos. When users upload photos and videos, a job processes the files. The job can take up to 1 hour to process long videos. The company uses On-Demand EC2 instances to run web servers and processing jobs. The web layer and the processing layer each have instances that run in an Auto Scaling group behind an Application Load Balancer. During peak hours, users report that the app is slow and that it does not process some requests at all. During evening hours, the systems are idle. What should a solutions architect do so that the app will process all jobs in the MOST cost-effective manner?

A: Use a larger instance size in the Auto Scaling groups of the web layer and the processing layer

B: Use Spot Instances for the Auto Scaling groups of the web layer and the processing layer

C: Use an SQS standard queue between the web and processing layers. Use a custom queue metric to scale the Auto Scaling group in the processing layer

D: Use AWS Lambda functions instead of EC2 instances and Auto Scaling groups. Increase the service quota so that sufficient concurrent functions can run at the same time.

A

A: Wrong. Larger instances would help during peaks, but the systems already sit idle in the evenings, so paying for bigger always-on instances is not cost-effective.
B: Wrong. Spot Instances are more cost-effective, but they have the same performance as On-Demand instances and can be interrupted, so the slowness and dropped requests are not fixed.
C: Correct. You can set it up so that once the queue starts to back up, the processing layer scales out. The queue also buffers jobs so none are dropped during peaks.
D: Wrong. Lambda functions can run for at most 15 minutes, but some jobs take up to 1 hour.
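Answer C's queue-depth scaling can be sketched as a simple backlog-per-instance calculation. The figure of 10 jobs per worker below is an assumption for illustration, not something from the question:

```python
import math

def desired_workers(queue_depth: int, jobs_per_worker: int, max_size: int) -> int:
    """Desired Auto Scaling capacity from the SQS backlog.

    queue_depth is the queue's visible-message count (the custom metric);
    jobs_per_worker is how many queued jobs one instance can absorb.
    """
    needed = math.ceil(queue_depth / jobs_per_worker)
    return max(1, min(needed, max_size))  # stay within the group's limits

# Peak hours: 100 queued jobs, 10 per worker -> scale out to 10 instances
print(desired_workers(100, 10, max_size=20))   # → 10
# Idle evenings: empty queue -> scale in to the minimum
print(desired_workers(0, 10, max_size=20))     # → 1
```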

4
Q

Which of the following has auto scaling already built into the service?

EC2
EBS
EFS
S3

A

EFS. Elastic File System grows and shrinks automatically as you add and remove files, with no scaling configuration needed.

5
Q

An SA has been given a large number of video files to upload to an S3 bucket. The file sizes are 100-500MB. The SA also wants to easily resume failed upload attempts. How should the SA perform the uploads in the LEAST amount of time?

A: Split each file into 5 MB parts. Upload the individual parts normally and use S3 multipart upload to merge the parts into a complete object
B: Using the AWS CLI, copy individual objects into the S3 bucket with the aws s3 cp command.
C: From the S3 console, open the S3 bucket, choose Upload, and drag and drop the files into the bucket
D: Upload the files with SFTP and the AWS Transfer Family.

A

A: Wrong. You wouldn’t split files into 5 MB parts yourself; multipart upload is intended for large files (AWS recommends it for objects over 100 MB) and the tooling does the splitting for you. A waste of time.
B: Correct. The CLI aws s3 cp command automatically uses multipart upload/download based on file size, and files above 100 MB should use multipart.
C: Wrong. Does not protect against network issues or automatically use multipart, so failed uploads can’t easily be resumed.
D: Wrong. Failed uploads couldn’t easily be resumed, and in general, setting up and using these services takes longer.
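To see why manually splitting into 5 MB parts (option A) is wasted effort, here is a sketch of the part arithmetic `aws s3 cp` already does for you. The 8 MiB threshold and chunk size are the AWS CLI's default `multipart_threshold` and `multipart_chunksize` settings:

```python
import math

MIB = 1024 * 1024
THRESHOLD = 8 * MIB    # AWS CLI default multipart_threshold
CHUNK = 8 * MIB        # AWS CLI default multipart_chunksize

def upload_parts(file_size: int) -> int:
    """How many parts `aws s3 cp` uploads for a file of this size.

    Returns 1 when the file is small enough for a single plain PUT.
    """
    if file_size <= THRESHOLD:
        return 1
    return math.ceil(file_size / CHUNK)

print(upload_parts(500 * MIB))  # a 500 MB video → 63 parts
print(upload_parts(5 * MIB))    # small file → ordinary single upload
```

Each part can be retried independently, which is why the CLI resumes failed transfers far more gracefully than a single monolithic PUT.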

6
Q

A large international company has a management account in AWS Organizations, and over 50 individual accounts for each country they operate in. Each of the country accounts has at least four VPCs set up for functional divisions. There is a high amount of trust across the accounts, and communication among all of the VPCs should be allowed. Each of the individual VPCs throughout the entire global organization will need to access an account and VPC that provide shared services to all the other accounts. How can the member accounts access the shared services VPC with the LEAST operational overhead?

A: Create an Application Load Balancer, with a target of the private IP address of the shared services VPC. Add a Certification Authority Authorization (CAA) record for the ALB to Route 53. Point all requests for shared services in the VPCs’ routing tables to that CAA record
B: Create a peering connection between each of the VPCs and the shared services VPC
C: Create a network load balancer across the AZs in the shared services VPC. Create service consumer roles in IAM, and set endpoint connection acceptance to automatically accept. Create consumer endpoints in each division VPC and point to the NLB
D: Create a VPN connection between each of the VPCs and the shared service VPC.

A

A: Wrong. A CAA record specifies which certificate authorities may issue certificates for a domain; it has nothing to do with routing.
B: Wrong. A VPC supports at most 125 peering connections, and with 50+ accounts of at least four VPCs each there are over 200 VPCs to connect, so peering can’t scale to this.
C: Correct. This describes setting up AWS PrivateLink, which is more appropriate than VPC peering here. You use PrivateLink when you have a client-server setup where one or more consumer VPCs need unidirectional access to a specific service in the service-provider VPC.
D: Wrong. You could do this, but maintaining a VPN for every VPC is a lot of operational overhead.
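A hedged sketch of the two requests behind answer C, expressed as plain data rather than live API calls. Every ARN, ID, and service name below is a made-up placeholder:

```python
# 1) In the shared services account: expose the NLB as a PrivateLink
#    endpoint service, auto-accepting consumer connections.
endpoint_service = {
    "NetworkLoadBalancerArns": [
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:"
        "loadbalancer/net/shared-svc/0123456789abcdef"    # placeholder ARN
    ],
    "AcceptanceRequired": False,   # auto-accept, as the answer states
}

# 2) In each division VPC: create an interface endpoint pointing at it.
consumer_endpoint = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0division",                                  # placeholder
    "ServiceName": "com.amazonaws.vpce.us-east-1.vpce-svc-0abc",  # placeholder
    "SubnetIds": ["subnet-0a", "subnet-0b"],                   # one per AZ
}

# With boto3 and credentials configured, these would be applied as:
# ec2 = boto3.client("ec2")
# ec2.create_vpc_endpoint_service_configuration(**endpoint_service)
# ec2.create_vpc_endpoint(**consumer_endpoint)
```

Because each consumer endpoint is an independent one-way connection into the provider VPC, there is no per-VPC route-table or peering mesh to maintain, which is what keeps the operational overhead low.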

7
Q

A company is building a distributed app that will send sensor IoT data (including weather conditions and wind speed from wind turbines) to the cloud for further processing. Because the data is spiky, the app needs to be able to scale. It is important to store the streaming data in a key-value database and then send it to a centralized data lake, where it can be transformed, analyzed, and combined with diverse organizational datasets to derive meaningful insights and make predictions. Which combination of solutions would accomplish the business need with minimal operational overhead? (Select TWO)

A: Configure Kinesis to deliver streaming data to an S3 data lake
B: Use DocumentDB to store IoT sensor data
C: Write Lambda functions to deliver streaming data to S3
D: Use DynamoDB to store the IoT sensor data and enable DynamoDB Streams.
E: Use Kinesis to deliver streaming data to Redshift and enable Redshift Spectrum

A

A: Correct. Kinesis can take the records from DynamoDB Streams and deliver them to the data lake.
B: Wrong. DocumentDB is a document database, not a key-value database.
C: Wrong. You could do this, but writing and maintaining custom Lambda code adds overhead.
D: Correct. DynamoDB is a key-value database and can handle spiky data with auto scaling. DynamoDB Streams can feed the data onward to the data lake.
E: Wrong. S3 is a better choice than Redshift for the data lake because you can combine data from multiple sources more easily in S3, resulting in less overhead.
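To make answers A and D concrete, here is a hypothetical shape of one sensor reading as a DynamoDB key-value item. The table name, key names, and values are all assumptions for illustration:

```python
import json

# One wind-turbine reading in DynamoDB's attribute-value wire format.
# Using "turbine_id" as partition key and "ts" as sort key is an assumption.
sensor_item = {
    "turbine_id": {"S": "turbine-042"},
    "ts": {"S": "2024-06-01T12:00:00Z"},
    "wind_speed_mps": {"N": "14.2"},   # DynamoDB numbers travel as strings
    "temperature_c": {"N": "21.5"},
}
# With boto3 this would be written as:
# boto3.client("dynamodb").put_item(TableName="iot-readings", Item=sensor_item)

# With DynamoDB Streams enabled, each write surfaces as a stream record
# that Kinesis can deliver to the S3 data lake, e.g. serialized as JSON:
print(json.dumps({"turbine_id": "turbine-042", "wind_speed_mps": 14.2}))
```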

8
Q

What are the main solutions layers?

A

Compute
Storage
Database
Networking

9
Q

You are architecting a solution for a company to migrate an existing backend web service to AWS. The application currently runs on web servers in an on-premises data center, and the company needs to migrate the app with the least amount of rework to the code as possible. What type of solution would you recommend for this?
A: Use lift and shift to EC2
B: Host with a container service
C: Host with Lambda

A

A: Correct. This is because you can select an OS for the EC2 instance that matches the original environment, so very little reworking would be needed
B: Wrong. Containerizing the app would require rework to package it and possibly change the code
C: Wrong. Refactoring the app into Lambda functions would require significant code rework
