saa-c02-part-17 Flashcards

1
Q

A company wants to build an online marketplace application on AWS as a set of loosely coupled microservices. For this application, when a customer submits a new order, two microservices should handle the event simultaneously. The Email microservice will send a confirmation email, and the OrderProcessing microservice will start the order delivery process. If a customer cancels an order, the OrderCancelation and Email microservices should handle the event simultaneously.

A solutions architect wants to use Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification Service (Amazon SNS) to design the messaging between the microservices.

How should the solutions architect design the solution?

  1. Create a single SQS queue and publish order events to it. The Email, OrderProcessing, and OrderCancelation microservices can then consume messages off the queue.
  2. Create three SNS topics, one for each microservice. Publish order events to the three topics. Subscribe each of the Email, OrderProcessing, and OrderCancelation microservices to its own topic.
  3. Create an SNS topic and publish order events to it. Create three SQS queues for the Email, OrderProcessing, and OrderCancelation microservices. Subscribe all SQS queues to the SNS topic with message filtering.
  4. Create two SQS queues and publish order events to both queues simultaneously. One queue is for the Email and OrderProcessing microservices. The second queue is for the Email and OrderCancelation microservices.
A
  3. Create an SNS topic and publish order events to it. Create three SQS queues for the Email, OrderProcessing, and OrderCancelation microservices. Subscribe all SQS queues to the SNS topic with message filtering.

Create three SNS topics for each microservice = not 2

single SQS queue = not 1

no SNS = not 4

Subscribe all SQS queues to the SNS topic = queues subscribe to topics
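The fan-out can be sketched in boto3 terms: each order event is published once to the topic with a message attribute, and each queue's SNS subscription carries a filter policy so it receives only the event types it handles. Topic/queue names and the `event_type` attribute are illustrative, not from the question.

```python
import json

# One SNS topic; each SQS queue subscribes with a filter policy so it only
# receives the event types it cares about (names are illustrative).
FILTER_POLICIES = {
    "email-queue": {"event_type": ["order_created", "order_cancelled"]},
    "order-processing-queue": {"event_type": ["order_created"]},
    "order-cancelation-queue": {"event_type": ["order_cancelled"]},
}

def subscribe_attributes(queue_name):
    """Attributes dict for sns.subscribe(..., Protocol='sqs', Endpoint=queue_arn)."""
    return {"FilterPolicy": json.dumps(FILTER_POLICIES[queue_name])}

def matching_queues(event_type):
    """Which queues would receive a message published with this attribute."""
    return sorted(q for q, p in FILTER_POLICIES.items()
                  if event_type in p["event_type"])
```

With these policies, an `order_created` event fans out to the Email and OrderProcessing queues simultaneously, and `order_cancelled` to the Email and OrderCancelation queues, which is exactly the behavior the question asks for.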

2
Q

A company is running a multi-tier ecommerce web application in the AWS Cloud. The application runs on Amazon EC2 Instances with an Amazon RDS MySQL Multi-AZ DB instance. Amazon RDS is configured with the latest generation instance with 2,000 GB of storage in an Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2) volume. The database performance impacts the application during periods of high demand.

After analyzing the logs in Amazon CloudWatch Logs, a database administrator finds that the application performance always degrades when the number of read and write IOPS is higher than 6,000.

What should a solutions architect do to improve the application performance?

  1. Replace the volume with a Magnetic volume.
  2. Increase the number of IOPS on the gp2 volume.
  3. Replace the volume with a Provisioned IOPS (PIOPS) volume.
  4. Replace the 2,000 GB gp2 volume with two 1,000 GB gp2 volumes.
A
  3. Replace the volume with a Provisioned IOPS (PIOPS) volume.

improve the application performance = PIOPS

gp2 baseline = 3 IOPS/GB × 2,000 GB = 6,000 IOPS, exactly the observed ceiling; gp2 IOPS scale only with volume size and cannot be increased independently = not 2

https://aws.amazon.com/ebs/features/
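The gp2 arithmetic behind the 6,000 IOPS ceiling, plus a sketch of the hypothetical `rds.modify_db_instance(**params)` call to move to Provisioned IOPS; the identifier and the 12,000 IOPS figure are illustrative, not from the question.

```python
def gp2_baseline_iops(size_gib):
    # gp2 baseline scales at 3 IOPS per GiB, with a 100 IOPS floor
    # and a 16,000 IOPS ceiling.
    return min(max(3 * size_gib, 100), 16_000)

# The 2,000 GB volume tops out at exactly the 6,000 IOPS seen in CloudWatch.
assert gp2_baseline_iops(2_000) == 6_000

# Hypothetical parameters for rds.modify_db_instance(**params); switching
# the storage type to io1 decouples IOPS from volume size.
params = {
    "DBInstanceIdentifier": "ecommerce-db",  # illustrative
    "StorageType": "io1",
    "Iops": 12_000,                          # illustrative target
    "ApplyImmediately": True,
}
```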

3
Q

A company has an application that uses Amazon Elastic File System (Amazon EFS) to store data. The files are 1 GB in size or larger and are accessed often only for the first few days after creation. The application data is shared across a cluster of Linux servers. The company wants to reduce storage costs for the application.

What should a solutions architect do to meet these requirements?

  1. Implement Amazon FSx and mount the network drive on each server.
  2. Move the files from Amazon Elastic File System (Amazon EFS) and store them locally on each Amazon EC2 instance.
  3. Configure a Lifecycle policy to move the files to the EFS Infrequent Access (IA) storage class after 7 days.
  4. Move the files to Amazon S3 with S3 lifecycle policies enabled. Rewrite the application to support mounting the S3 bucket.
A
  3. Configure a Lifecycle policy to move the files to the EFS Infrequent Access (IA) storage class after 7 days.

only for the first few days + reduce storage costs = Lifecycle policy

https://docs.aws.amazon.com/efs/latest/ug/lifecycle-management-efs.html
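As a sketch, the lifecycle rule is a single hypothetical `efs.put_lifecycle_configuration(**request)` call; the file system ID is an illustrative placeholder.

```python
# Hypothetical efs.put_lifecycle_configuration(**request): files not
# accessed for 7 days transition to EFS Infrequent Access automatically,
# cutting storage cost with no application change.
request = {
    "FileSystemId": "fs-12345678",  # illustrative ID
    "LifecyclePolicies": [
        {"TransitionToIA": "AFTER_7_DAYS"},
    ],
}
```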

4
Q

A company has a service that produces event data. The company wants to use AWS to process the event data as it is received. The data is written in a specific order that must be maintained throughout processing. The company wants to implement a solution that minimizes operational overhead.

How should a solutions architect accomplish this?

  1. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages. Set up an AWS Lambda function to process messages from the queue.
  2. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an AWS Lambda function as a subscriber.
  3. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to hold messages. Set up an AWS Lambda function to process messages from the queue independently.
  4. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a subscriber.
A
  1. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages. Set up an AWS Lambda function to process messages from the queue.

specific order = FIFO
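A minimal sketch of the queue creation: FIFO queues must carry the `.fifo` suffix, and messages sharing a `MessageGroupId` are delivered in order. The queue name is illustrative.

```python
# Hypothetical sqs.create_queue(**request): the .fifo suffix is mandatory
# for FIFO queues; ordering is preserved per MessageGroupId, and
# content-based deduplication avoids having to supply explicit dedup IDs.
request = {
    "QueueName": "event-data.fifo",  # illustrative name
    "Attributes": {
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
    },
}
assert request["QueueName"].endswith(".fifo")
```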

5
Q

A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will last 1 week. What should the company do to guarantee the EC2 capacity?

  1. Purchase Reserved Instances that specify the Region needed.
  2. Create an On-Demand Capacity Reservation that specifies the Region needed.
  3. Purchase Reserved Instances that specify the Region and three Availability Zones needed.
  4. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.
A
  4. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.

last 1 week = On-Demand Capacity Reservation; Reserved Instances require a 1-year minimum commitment = not 1, 3
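Capacity Reservations are zonal, so three AZs need three reservations; each can carry a limited end date so it expires after the event. A sketch of the hypothetical `ec2.create_capacity_reservation(**req)` requests, with instance type, count, and AZ names as illustrative values:

```python
from datetime import datetime, timedelta, timezone

# One hypothetical ec2.create_capacity_reservation(**req) per AZ; a
# reservation is tied to a single AZ, so three AZs mean three requests.
end = datetime.now(timezone.utc) + timedelta(weeks=1)
requests = [
    {
        "InstanceType": "c5.xlarge",       # illustrative
        "InstancePlatform": "Linux/UNIX",
        "AvailabilityZone": az,            # illustrative AZs
        "InstanceCount": 20,               # illustrative
        "EndDateType": "limited",          # release capacity automatically
        "EndDate": end,
    }
    for az in ("us-east-1a", "us-east-1b", "us-east-1c")
]
```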

6
Q

A company wants to migrate its web application to AWS. The legacy web application consists of a web tier, an application tier, and a MySQL database. The re-architected application must consist of technologies that do not require the administration team to manage instances or clusters.

Which combination of services should a solutions architect include in the overall architecture? (Choose two.)

  1. Amazon Aurora Serverless
  2. Amazon EC2 Spot Instances
  3. Amazon Elasticsearch Service (Amazon ES)
  4. Amazon RDS for MySQL
  5. AWS Fargate
A
  1. Amazon Aurora Serverless
  5. AWS Fargate

technologies that do not require the administration team to manage instances or clusters = serverless = 1,5

https://aws.amazon.com/rds/aurora/serverless/

https://aws.amazon.com/fargate/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc&fargate-blogs.sort-by=item.additionalFields.createdDate&fargate-blogs.sort-order=desc

7
Q

An ecommerce company is experiencing an increase in user traffic. The company’s store is deployed on Amazon EC2 instances as a two-tier application consisting of a web tier and a separate database tier. As traffic increases, the company notices that the architecture is causing significant delays in sending timely marketing and order confirmation emails to users. The company wants to reduce the time it spends resolving complex email delivery issues and minimize operational overhead.

What should a solutions architect do to meet these requirements?

  1. Create a separate application tier using EC2 instances dedicated to email processing.
  2. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES).
  3. Configure the web instance to send email through Amazon Simple Notification Service (Amazon SNS).
  4. Create a separate application tier using EC2 instances dedicated to email processing. Place the instances in an Auto Scaling group.
A
  2. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES).

sending timely marketing and order confirmation email = SES

Use cases (https://aws.amazon.com/ses/):

Transactional emails: send immediate, trigger-based communications from your application to customers, such as purchase confirmations or password resets.

Marketing emails: promote your products and services, such as special offers and newsletters, with customized content and email templates.

Bulk email communication: send bulk communications, including notifications and announcements, to large communities, and track results using configuration sets.
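A minimal sketch of the transactional send as a hypothetical `ses.send_email(**request)` call; SES then owns deliverability concerns (bounces, sending reputation) instead of the web tier. Addresses and copy are illustrative.

```python
# Hypothetical ses.send_email(**request) for an order confirmation;
# Source, Destination, and message text are illustrative placeholders.
request = {
    "Source": "orders@example.com",
    "Destination": {"ToAddresses": ["customer@example.com"]},
    "Message": {
        "Subject": {"Data": "Order confirmation"},
        "Body": {"Text": {"Data": "Thanks for your order!"}},
    },
}
```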

8
Q

A company recently started using Amazon Aurora as the data store for its global ecommerce application. When large reports are run, developers report that the ecommerce application is performing poorly. After reviewing metrics in Amazon CloudWatch, a solutions architect finds that the ReadIOPS and CPU Utilization metrics are spiking when monthly reports run.

What is the MOST cost-effective solution?

  1. Migrate the monthly reporting to Amazon Redshift.
  2. Migrate the monthly reporting to an Aurora Replica.
  3. Migrate the Aurora database to a larger instance class.
  4. Increase the Provisioned IOPS on the Aurora instance.
A
  2. Migrate the monthly reporting to an Aurora Replica.

reports + ReadIOPS = read replicas = 2

9
Q

A company uses on-premises servers to host its applications. The company is running out of storage capacity. The applications use both block storage and NFS storage. The company needs a high-performing solution that supports local caching without re-architecting its existing applications.

Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)

  1. Mount Amazon S3 as a file system to the on-premises servers.
  2. Deploy an AWS Storage Gateway file gateway to replace NFS storage.
  3. Deploy AWS Snowball Edge to provision NFS mounts to on-premises servers.
  4. Deploy an AWS Storage Gateway volume gateway to replace the block storage.
  5. Deploy Amazon Elastic File System (Amazon EFS) volumes and mount them to on-premises servers.
A
  2. Deploy an AWS Storage Gateway file gateway to replace NFS storage.
  4. Deploy an AWS Storage Gateway volume gateway to replace the block storage.

on-premises = gateway = 2,4

https://aws.amazon.com/storagegateway/file/

File Gateway provides a seamless way to connect to the cloud in order to store application data files and backup images as durable objects in Amazon S3 cloud storage. File Gateway offers SMB or NFS-based access to data in Amazon S3 with local caching. It can be used for on-premises applications, and for Amazon EC2-based applications that need file protocol access to S3 object storage.

https://aws.amazon.com/storagegateway/volume/

Volume Gateway presents cloud-backed iSCSI block storage volumes to your on-premises applications. Volume Gateway stores and manages on-premises data in Amazon S3 on your behalf and operates in either cache mode or stored mode. In the cached Volume Gateway mode, your primary data is stored in Amazon S3, while retaining your frequently accessed data locally in the cache for low latency access.

10
Q

A solutions architect needs to design a highly available application consisting of web, application, and database tiers. HTTPS content delivery should be as close to the edge as possible, with the least delivery time.

Which solution meets these requirements and is MOST secure?

  1. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in public subnets. Configure Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.
  2. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the EC2 instances as the origin.
  3. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.
  4. Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances in public subnets. Configure Amazon CloudFront to deliver HTTPS content using the EC2 instances as the origin.
A
  3. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.

Application Load Balancer = goes in public subnet = 3,4

Amazon EC2 instances in private subnets = 3

ALB is public origin

EC2 must be in private subnet

CloudFront to distribute
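The origin/behavior wiring can be sketched as the relevant fragments of a hypothetical CloudFront distribution config: the public ALB's DNS name is the custom origin, fetched over HTTPS only, with viewers redirected to HTTPS. The DNS name is an illustrative placeholder.

```python
# Fragments of a hypothetical cloudfront.create_distribution config:
# the public ALB is the custom origin; EC2 stays private behind it.
origin = {
    "Id": "alb-origin",
    "DomainName": "my-alb-123456.us-east-1.elb.amazonaws.com",  # illustrative
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "https-only",  # CloudFront -> ALB over HTTPS
    },
}
default_cache_behavior = {
    "TargetOriginId": "alb-origin",
    "ViewerProtocolPolicy": "redirect-to-https",  # force HTTPS at the edge
}
```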

11
Q

A company has a popular gaming platform running on AWS. The application is sensitive to latency because latency can impact the user experience and introduce unfair advantages to some players. The application is deployed in every AWS Region. It runs on Amazon EC2 instances that are part of Auto Scaling groups configured behind Application Load Balancers (ALBs). A solutions architect needs to implement a mechanism to monitor the health of the application and redirect traffic to healthy endpoints.

Which solution meets these requirements?

  1. Configure an accelerator in AWS Global Accelerator. Add a listener for the port that the application listens on and attach it to a Regional endpoint in each Region. Add the ALB as the endpoint.
  2. Create an Amazon CloudFront distribution and specify the ALB as the origin server. Configure the cache behavior to use origin cache headers. Use AWS Lambda functions to optimize the traffic.
  3. Create an Amazon CloudFront distribution and specify Amazon S3 as the origin server. Configure the cache behavior to use origin cache headers. Use AWS Lambda functions to optimize the traffic.
  4. Configure an Amazon DynamoDB database to serve as the data store for the application. Create a DynamoDB Accelerator (DAX) cluster to act as the in-memory cache for DynamoDB hosting the application data.
A
  1. Configure an accelerator in AWS Global Accelerator. Add a listener for the port that the application listens on and attach it to a Regional endpoint in each Region. Add the ALB as the endpoint.

monitor the health + redirect to endpoints = Global Accelerator

deployed in every AWS Region = Global


https://aws.amazon.com/blogs/gametech/improving-the-player-experience-by-leveraging-aws-global-accelerator-and-amazon-gamelift-fleetiq/
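The setup is three hypothetical Global Accelerator calls, sketched here as request dicts: `create_accelerator`, `create_listener`, then one endpoint group per Region with that Region's ALB as the endpoint. Ports, Regions, and ARNs are illustrative.

```python
# Hypothetical globalaccelerator requests; health checks let the
# accelerator route around unhealthy Regional endpoints.
accelerator = {"Name": "gaming-platform", "IpAddressType": "IPV4"}

listener = {
    "Protocol": "TCP",
    "PortRanges": [{"FromPort": 443, "ToPort": 443}],  # illustrative port
}

endpoint_groups = [
    {
        "EndpointGroupRegion": region,
        "EndpointConfigurations": [
            # illustrative ALB ARN in that Region
            {"EndpointId": f"arn:aws:elasticloadbalancing:{region}:123456789012:loadbalancer/app/game/abc"}
        ],
        "HealthCheckProtocol": "TCP",
    }
    for region in ("us-east-1", "eu-west-1", "ap-northeast-1")
]
```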

12
Q

A company is designing an internet-facing web application. The application runs on Amazon EC2 Linux instances that store sensitive user data in Amazon RDS MySQL Multi-AZ DB instances. The EC2 instances are in public subnets, and the RDS DB instances are in private subnets. The security team has mandated that the DB instances be secured against web-based attacks.

What should a solutions architect recommend?

  1. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Configure the EC2 instance iptables rules to drop suspicious web traffic. Create a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the individual EC2 instances.
  2. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Move DB instances to the same subnets that EC2 instances are located in. Create a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the individual EC2 instances.
  3. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Use AWS WAF to monitor inbound web traffic for threats. Create a security group for the web application servers and a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the web application server security group.
  4. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Use AWS WAF to monitor inbound web traffic for threats. Configure the Auto Scaling group to automatically create new DB instances under heavy traffic. Create a security group for the RDS DB instances. Configure the RDS security group to only allow port 3306 inbound.
A
  3. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Use AWS WAF to monitor inbound web traffic for threats. Create a security group for the web application servers and a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the web application server security group.

1 is wrong; how do you “drop suspicious web traffic” using iptables? You have to specify IP Address

2 is wrong; “Move DB instances to the same subnets that EC2 instances are located in” means all are in public subnets and this violates “sensitive user data” storage principles

4 is wrong; “allow port 3306 inbound” does not specify a source, and Auto Scaling groups cannot create DB instances
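The winning ingress rule can be sketched as a hypothetical `ec2.authorize_security_group_ingress(**request)`: the DB security group names the web tier's security group as the source, never a CIDR range. Both group IDs are illustrative placeholders.

```python
# Hypothetical ec2.authorize_security_group_ingress(**request): the RDS
# security group admits MySQL (3306) traffic only from instances that
# carry the web tier's security group.
DB_SG = "sg-0123456789abcdef0"   # illustrative: attached to RDS
WEB_SG = "sg-0fedcba9876543210"  # illustrative: attached to web servers

request = {
    "GroupId": DB_SG,
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        # source is a security group reference, not an IP range
        "UserIdGroupPairs": [{"GroupId": WEB_SG}],
    }],
}
```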

13
Q

A development team stores its Amazon RDS MySQL DB instance user name and password credentials in a configuration file. The configuration file is stored as plaintext on the root device volume of the team’s Amazon EC2 instance. When the team’s application needs to reach the database, it reads the file and loads the credentials into the code. The team has modified the permissions of the configuration file so that only the application can read its content. A solutions architect must design a more secure solution.

What should the solutions architect do to meet this requirement?

  1. Store the configuration file in Amazon S3. Grant the application access to read the configuration file.
  2. Create an IAM role with permission to access the database. Attach this IAM role to the EC2 instance.
  3. Enable SSL connections on the database instance. Alter the database user to require SSL when logging in.
  4. Move the configuration file to an EC2 instance store, and create an Amazon Machine Image (AMI) of the instance. Launch new instances from this AMI.
A

  2. Create an IAM role with permission to access the database. Attach this IAM role to the EC2 instance.

plaintext credentials = replace with an IAM role attached to the instance (RDS MySQL supports IAM database authentication, so no stored password is needed)
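The role's trust policy is the standard EC2 assume-role document; the permission to attach (e.g. `rds-db:connect` for IAM database authentication) is a reasonable assumption for this scenario, not stated in the question.

```python
# Standard trust policy letting EC2 assume the role; the role would then
# carry database-access permissions (e.g. rds-db:connect, an assumption
# here) instead of the app reading a plaintext credentials file.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
```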

14
Q

A company wants a storage option that enables its data science team to analyze its data on premises and in the AWS Cloud. The team needs to be able to run statistical analyses by using the data on premises and by using a fleet of Amazon EC2 instances across multiple Availability Zones.

What should a solutions architect do to meet these requirements?

  1. Use an AWS Storage Gateway tape gateway to copy the on-premises files into Amazon S3.
  2. Use an AWS Storage Gateway volume gateway to copy the on-premises files into Amazon S3.
  3. Use an AWS Storage Gateway file gateway to copy the on-premises files to Amazon Elastic Block Store (Amazon EBS).
  4. Attach an Amazon Elastic File System (Amazon EFS) file system to the on-premises servers. Copy the files to Amazon EFS.
A
  4. Attach an Amazon Elastic File System (Amazon EFS) file system to the on-premises servers. Copy the files to Amazon EFS.

fleet of Amazon EC2 instances = concurrent = EFS

15
Q

A company wants to improve the availability and performance of its stateless UDP-based workload. The workload is deployed on Amazon EC2 instances in multiple AWS Regions.

What should a solutions architect recommend to accomplish this?

  1. Place the EC2 instances behind Network Load Balancers (NLBs) in each Region. Create an accelerator using AWS Global Accelerator. Use the NLBs as endpoints for the accelerator.
  2. Place the EC2 instances behind Application Load Balancers (ALBs) in each Region. Create an accelerator using AWS Global Accelerator. Use the ALBs as endpoints for the accelerator.
  3. Place the EC2 instances behind Network Load Balancers (NLBs) in each Region. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the NLBs.
  4. Place the EC2 instances behind Application Load Balancers (ALBs) in each Region. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the ALBs.
A
  1. Place the EC2 instances behind Network Load Balancers (NLBs) in each Region. Create an accelerator using AWS Global Accelerator. Use the NLBs as endpoints for the accelerator.

stateless UDP-based workload = layer 4 + stateless = NLB = 1,2

no mention of content hosting = 1 Global Accelerator

https://www.redhat.com/en/topics/cloud-native-apps/stateful-vs-stateless

16
Q

A company wants to use high performance computing (HPC) infrastructure on AWS for financial risk modeling. The company’s HPC workloads run on Linux. Each HPC workflow runs on hundreds of Amazon EC2 Spot Instances, is short-lived, and generates thousands of output files that are ultimately stored in persistent storage for analytics and long-term future use.

The company seeks a cloud storage solution that permits the copying of on premises data to long-term persistent storage to make data available for processing by all EC2 instances. The solution should also be a high performance file system that is integrated with persistent storage to read and write datasets and output files.

Which combination of AWS services meets these requirements?

  1. Amazon FSx for Lustre integrated with Amazon S3
  2. Amazon FSx for Windows File Server integrated with Amazon S3
  3. Amazon S3 Glacier integrated with Amazon Elastic Block Store (Amazon EBS)
  4. Amazon S3 bucket with a VPC endpoint integrated with an Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2) volume
A
  1. Amazon FSx for Lustre integrated with Amazon S3

persistent storage for analytics = S3

workloads run on Linux = Lustre
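The pairing can be sketched as a hypothetical `fsx.create_file_system(**request)`: an FSx for Lustre file system linked to an S3 bucket, so S3 objects appear as files and job output can be exported back to S3 for long-term storage. Bucket, subnet, and capacity values are illustrative.

```python
# Hypothetical fsx.create_file_system(**request) linking FSx for Lustre
# to S3: ImportPath makes bucket objects visible as files, ExportPath
# writes results back to durable S3 storage.
request = {
    "FileSystemType": "LUSTRE",
    "StorageCapacity": 1200,  # GiB, illustrative
    "SubnetIds": ["subnet-0123456789abcdef0"],  # illustrative
    "LustreConfiguration": {
        "ImportPath": "s3://hpc-datasets",         # illustrative bucket
        "ExportPath": "s3://hpc-datasets/output",  # illustrative prefix
    },
}
```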

17
Q

A solutions architect must design a database solution for a high-traffic ecommerce web application. The database stores customer profiles and shopping cart information. The database must support a peak load of several million requests each second and deliver responses in milliseconds. The operational overhead of managing and scaling the database must be minimized.

Which database solution should the solutions architect recommend?

  1. Amazon Aurora
  2. Amazon DynamoDB
  3. Amazon RDS
  4. Amazon Redshift
A
  2. Amazon DynamoDB

ecommerce = dynamodb

deliver responses in milliseconds = DynamoDB

With DynamoDB, you can create database tables that can store and retrieve any amount of data and serve any level of request traffic. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

https://aws.amazon.com/dynamodb/

“shopping carts, workflow engines”

18
Q

A company is working with an external vendor that requires write access to the company’s Amazon Simple Queue Service (Amazon SQS) queue. The vendor has its own AWS account.

What should a solutions architect do to implement least privilege access?

  1. Update the permission policy on the SQS queue to give write access to the vendor’s AWS account.
  2. Create an IAM user with write access to the SQS queue and share the credentials for the IAM user.
  3. Update AWS Resource Access Manager to provide write access to the SQS queue from the vendor’s AWS account.
  4. Create a cross-account role with access to all SQS queues and use the vendor’s AWS account in the trust document for the role.
A
  1. Update the permission policy on the SQS queue to give write access to the vendor’s AWS account.

update 1 policy for 1 account = least privilege = 1

AWS RAM does not support sharing SQS queues: https://docs.aws.amazon.com/ram/latest/userguide/shareable.html = not 3

4 = too much access (all SQS queues) = not 4

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-overview-of-managing-access.html#sqs-managing-access-to-resource Attach a permission policy to a user in another AWS account – To grant user permissions to create an Amazon SQS queue, attach an Amazon SQS permissions policy to a user in another AWS account.
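Least privilege here means a queue policy granting only `sqs:SendMessage` on the one queue to the vendor's account, set via `sqs.set_queue_attributes`. The account IDs and ARNs below are illustrative placeholders.

```python
import json

# Hypothetical SQS queue policy: ONLY SendMessage, ONLY this queue,
# ONLY the vendor's account (IDs/ARNs are illustrative).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "VendorWriteOnly",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # vendor account
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:444455556666:orders-queue",
    }],
}

# Applied as sqs.set_queue_attributes(QueueUrl=..., Attributes=attributes)
attributes = {"Policy": json.dumps(policy)}
```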

19
Q

A company is creating a three-tier web application consisting of a web server, an application server, and a database server. The application will track GPS coordinates of packages as they are being delivered. The application will update the database every 0-5 seconds.

The tracking data will need to be read as fast as possible for users to check the status of their packages. Only a few packages might be tracked on some days, whereas millions of packages might be tracked on other days. Tracking will need to be searchable by tracking ID, customer ID, and order ID. Orders older than 1 month no longer need to be tracked.

What should a solutions architect recommend to accomplish this with minimal cost of ownership?

  1. Use Amazon DynamoDB. Enable Auto Scaling on the DynamoDB table. Schedule an automatic deletion script for items older than 1 month.
  2. Use Amazon DynamoDB with global secondary indexes. Enable Auto Scaling on the DynamoDB table and the global secondary indexes. Enable TTL on the DynamoDB table.
  3. Use an Amazon RDS On-Demand instance with Provisioned IOPS (PIOPS). Enable Amazon CloudWatch alarms to send notifications when PIOPS are exceeded. Increase and decrease PIOPS as needed.
  4. Use an Amazon RDS Reserved Instance with Provisioned IOPS (PIOPS). Enable Amazon CloudWatch alarms to send notification when PIOPS are exceeded. Increase and decrease PIOPS as needed.
A
  2. Use Amazon DynamoDB with global secondary indexes. Enable Auto Scaling on the DynamoDB table and the global secondary indexes. Enable TTL on the DynamoDB table.

Orders older than 1 month no longer need to be tracked = 1,2

Older than 1 month = TTL

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/time-to-live-ttl-before-you-start.html

Global secondary indexes for tracking / read fast https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html
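A sketch of the table shape under plausible assumptions (all attribute and index names are illustrative): tracking ID as the partition key, one GSI per extra lookup pattern, and a TTL attribute set 30 days out so old items expire with no deletion script.

```python
# Illustrative DynamoDB design: base table keyed on tracking_id, GSIs
# cover the customer_id and order_id lookups, TTL expires old items.
ttl_spec = {"Enabled": True, "AttributeName": "expires_at"}
# passed as dynamodb.update_time_to_live(TableName=..., TimeToLiveSpecification=ttl_spec)

gsis = [
    {"IndexName": "by-customer",
     "KeySchema": [{"AttributeName": "customer_id", "KeyType": "HASH"}]},
    {"IndexName": "by-order",
     "KeySchema": [{"AttributeName": "order_id", "KeyType": "HASH"}]},
]

def item_expiry(now_epoch):
    """TTL attribute value: epoch seconds 30 days after the item is written."""
    return now_epoch + 30 * 24 * 3600
```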

20
Q

A solutions architect is creating a data processing job that runs once daily and can take up to 2 hours to complete. If the job is interrupted, it has to restart from the beginning.

How should the solutions architect address this issue in the MOST cost-effective manner?

  1. Create a script that runs locally on an Amazon EC2 Reserved Instance that is triggered by a cron job.
  2. Create an AWS Lambda function triggered by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event.
  3. Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event.
  4. Use an Amazon Elastic Container Service (Amazon ECS) task running on Amazon EC2 triggered by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event.
A
  3. Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event.

MOST cost-effective = serverless = ECS+Fargate = 3

this is serverless vs EC2 question

1 is wrong; “EC2 Reserved Instance” not cost effective compared to serverless

2 is wrong; Lambda runs for 15 minutes max

4 is wrong; “running on Amazon EC2” not cost effective

https://aws.amazon.com/blogs/containers/theoretical-cost-optimization-by-amazon-ecs-launch-type-fargate-vs-ec2/
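The wiring is a hypothetical `events.put_rule` plus `events.put_targets` pair: a daily cron fires a Fargate task, which can run for the full 2 hours with no Lambda-style 15-minute cap. Rule name, schedule, ARNs, and subnets are illustrative.

```python
# Hypothetical events.put_rule(**rule): a cron schedule firing once daily.
rule = {
    "Name": "daily-data-processing",
    "ScheduleExpression": "cron(0 3 * * ? *)",  # 03:00 UTC every day, illustrative
}

# Hypothetical target for events.put_targets(Rule=..., Targets=[target]):
# the rule launches a Fargate task on an ECS cluster (ARNs illustrative).
target = {
    "Id": "run-processing-task",
    "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/processing",
    "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",
    "EcsParameters": {
        "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/processor:1",
        "LaunchType": "FARGATE",  # no instances to manage, pay per run
        "NetworkConfiguration": {
            "awsvpcConfiguration": {"Subnets": ["subnet-0123456789abcdef0"]}
        },
    },
}
```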