saa-c02-part-18 Flashcards

1
Q

A company needs to store data in Amazon S3. A compliance requirement states that when any changes are made to objects, the previous state of the object must be preserved. Additionally, files older than 5 years should not be accessed but need to be archived for auditing.

What should a solutions architect recommend that is MOST cost-effective?

  1. Enable object-level versioning and S3 Object Lock in governance mode
  2. Enable object-level versioning and S3 Object Lock in compliance mode
  3. Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Glacier Deep Archive
  4. Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Standard-Infrequent Access (S3 Standard-IA)
A
  3. Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Glacier Deep Archive

previous state must be preserved = versioning

files older than 5 years should not be accessed but need to be archived = lifecycle policy + S3 Glacier Deep Archive (cheapest archival class)
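As a sketch, the lifecycle rule for answer 3 is a dict in the shape boto3's put_bucket_lifecycle_configuration expects inside {"Rules": [...]} (rule ID and day count are illustrative; 5 years is approximated as 1825 days):

```python
import json

# Hypothetical rule: after ~5 years, move objects (and, since versioning
# is enabled, old object versions) to S3 Glacier Deep Archive.
lifecycle_rule = {
    "ID": "archive-after-5-years",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},  # apply to every object in the bucket
    "Transitions": [
        {"Days": 1825, "StorageClass": "DEEP_ARCHIVE"}
    ],
    "NoncurrentVersionTransitions": [
        {"NoncurrentDays": 1825, "StorageClass": "DEEP_ARCHIVE"}
    ],
}

print(json.dumps(lifecycle_rule, indent=2))
```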

2
Q

A new employee has joined a company as a deployment engineer. The deployment engineer will be using AWS CloudFormation templates to create multiple AWS resources. A solutions architect wants the deployment engineer to perform job activities while following the principle of least privilege.

Which combination of actions should the solutions architect take to accomplish this goal? (Choose two.)

  1. Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
  2. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
  3. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
  4. Create a new IAM User for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
  5. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.
A

4. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
5. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.

AWS account root user credentials = not least privilege = not 1

PowerUsers / AdministratorAccess = far broader than CloudFormation needs = not 2, 3

IAM policy that allows AWS CloudFormation actions only = least privilege = 4

permissions specific to the AWS CloudFormation stack = also least privilege = 5
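A sketch of the policy behind answer 4 (in practice the engineer also needs permissions for the resources the stacks create, or a CloudFormation service role; everything here is illustrative):

```python
import json

# Least-privilege sketch: allow only CloudFormation actions.
cfn_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "cloudformation:*",
            "Resource": "*",
        }
    ],
}

print(json.dumps(cfn_only_policy, indent=2))
```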
3
Q

A company is planning to use an Amazon DynamoDB table for data storage. The company is concerned about cost optimization. The table will not be used most mornings. In the evenings, the read and write traffic will often be unpredictable. When traffic spikes occur, they will happen very quickly.

What should a solutions architect recommend?

  1. Create a DynamoDB table in on-demand capacity mode.
  2. Create a DynamoDB table with a global secondary Index.
  3. Create a DynamoDB table with provisioned capacity and auto scaling.
  4. Create a DynamoDB table in provisioned capacity mode, and configure it as a global table.
A
  1. Create a DynamoDB table in on-demand capacity mode.

traffic spikes happen very quickly = on-demand capacity scales per request, no warm-up = 1

traffic will often be unpredictable = on-demand capacity = not provisioned = not 3,4

https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/
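As a sketch, on-demand mode is just BillingMode PAY_PER_REQUEST in the create_table parameters; no ProvisionedThroughput is supplied (table and key names are illustrative):

```python
# On-demand table parameters: pay per read/write request, no capacity
# planning. Note the absence of a ProvisionedThroughput key.
table_params = {
    "TableName": "events",
    "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
    "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
    "BillingMode": "PAY_PER_REQUEST",
}
```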

4
Q

A meteorological startup company has a custom web application to sell weather data to its users online. The company uses Amazon DynamoDB to store its data and wants to build a new service that sends an alert to the managers of four internal teams every time a new weather event is recorded. The company does not want this new service to affect the performance of the current application.

What should a solutions architect do to meet these requirements with the LEAST amount of operational overhead?

  1. Use DynamoDB transactions to write new event data to the table. Configure the transactions to notify internal teams.
  2. Have the current application publish a message to four Amazon Simple Notification Service (Amazon SNS) topics. Have each team subscribe to one topic.
  3. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.
  4. Add a custom attribute to each record to flag new items. Write a cron job that scans the table every minute for items that are new and notifies an Amazon Simple Queue Service (Amazon SQS) queue to which the teams can subscribe.
A
  3. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.

service that sends an alert = SNS = 2,3

2 = current application publishes to four SNS topics = modifies the current application and duplicates work = not least overhead, not fanout = not 2

3 wins
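A minimal sketch of the stream-triggered Lambda for answer 3: publish one message per INSERT to a single topic. The publish function is injected so the sketch runs without AWS credentials; the topic ARN and record shapes are illustrative:

```python
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:weather-alerts"

def handle_stream(event, publish):
    """Process a DynamoDB Streams batch; publish one alert per new item."""
    sent = 0
    for record in event.get("Records", []):
        if record.get("eventName") == "INSERT":
            new_image = record["dynamodb"]["NewImage"]
            publish(TOPIC_ARN, f"New weather event: {new_image}")
            sent += 1
    return sent

# Usage with a fake publisher instead of sns.publish:
messages = []
count = handle_stream(
    {"Records": [
        {"eventName": "INSERT", "dynamodb": {"NewImage": {"id": {"S": "evt-1"}}}},
        {"eventName": "MODIFY", "dynamodb": {}},
    ]},
    lambda arn, msg: messages.append((arn, msg)),
)
```

Because the stream is read asynchronously, the current application's write path is untouched.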

5
Q

A company is preparing to deploy a new serverless workload. A solutions architect needs to configure permissions for invoking an AWS Lambda function. The function will be triggered by an Amazon EventBridge (Amazon CloudWatch Events) rule. Permissions should be configured using the principle of least privilege.

Which solution will meet these requirements?

  1. Add an execution role to the function with lambda:InvokeFunction as the action and * as the principal.
  2. Add an execution role to the function with lambda:InvokeFunction as the action and Service:amazonaws.com as the principal.
  3. Add a resource-based policy to the function with lambda:* as the action and Service:events.amazonaws.com as the principal.
  4. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal.
A
  4. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal.

* as the principal = not least privilege = not 1

Service:amazonaws.com as the principal = not least privilege = not 2

lambda:* = all Lambda actions = not least privilege = not 3

4 wins

https://docs.aws.amazon.com/eventbridge/latest/userguide/resource-based-policies-eventbridge.html#lambda-permissions
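As a sketch, answer 4 maps to a lambda add_permission call: one action, one service principal, optionally scoped to a single rule with SourceArn (function and rule names are illustrative):

```python
# Parameters for lambda_client.add_permission(**permission), which adds a
# resource-based policy statement to the function.
permission = {
    "FunctionName": "my-function",
    "StatementId": "allow-eventbridge",
    "Action": "lambda:InvokeFunction",        # single action, not lambda:*
    "Principal": "events.amazonaws.com",      # only EventBridge, not *
    "SourceArn": "arn:aws:events:us-east-1:123456789012:rule/my-rule",
}
```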

6
Q

A company is building its web application using containers on AWS. The company requires three instances of the web application to run at all times. The application must be able to scale to meet increases in demand. Management is extremely sensitive to cost but agrees that the application should be highly available.

What should a solutions architect recommend?

  1. Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Fargate launch type. Create a task definition for the web application. Create an ECS service with a desired count of three tasks.
  2. Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Amazon EC2 launch type with three container instances in one Availability Zone. Create a task definition for the web application. Place one task for each container instance.
  3. Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Fargate launch type with one container instance in three different Availability Zones. Create a task definition for the web application. Create an ECS service with a desired count of three tasks.
  4. Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Amazon EC2 launch type with one container instance in two different Availability Zones. Create a task definition for the web application. Place two tasks on one container instance and one task on the remaining container instance.
A
  1. Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Fargate launch type. Create a task definition for the web application. Create an ECS service with a desired count of three tasks.

extremely sensitive to cost = try to stay serverless = 1,3

highly available = multi AZ = 1,3

Fargate spreads tasks across Availability Zones automatically = the "one container instance in three different Availability Zones" in 3 is unnecessary (Fargate has no container instances to manage) = 1 wins

https://aws.amazon.com/blogs/containers/amazon-ecs-availability-best-practices/
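As a sketch, answer 1 is a create_service call with desiredCount 3 on Fargate; listing subnets in multiple AZs lets Fargate spread the tasks (all names and IDs are illustrative):

```python
# Parameters for ecs_client.create_service(**service_params): ECS keeps
# three tasks running and replaces failed ones.
service_params = {
    "cluster": "web-cluster",
    "serviceName": "web-app",
    "taskDefinition": "web-app:1",
    "desiredCount": 3,
    "launchType": "FARGATE",
    "networkConfiguration": {
        "awsvpcConfiguration": {
            # subnets in different AZs -> tasks spread for HA
            "subnets": ["subnet-az1", "subnet-az2", "subnet-az3"],
        }
    },
}
```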

7
Q

A company is re-architecting a strongly coupled application to be loosely coupled. Previously the application used a request/response pattern to communicate between tiers. The company plans to use Amazon Simple Queue Service (Amazon SQS) to achieve decoupling requirements. The initial design contains one queue for requests and one for responses. However, this approach is not processing all the messages as the application scales.

What should a solutions architect do to resolve this issue?

  1. Configure a dead-letter queue on the ReceiveMessage API action of the SQS queue.
  2. Configure a FIFO queue, and use the message deduplication ID and message group ID.
  3. Create a temporary queue, with the Temporary Queue Client to receive each response message.
  4. Create a queue for each request and response on startup for each producer, and use a correlation ID message attribute.
A
  3. Create a temporary queue, with the Temporary Queue Client to receive each response message.

request/response pattern = Temporary Queue Client

“To better support short-lived, lightweight messaging destinations, we are pleased to present the Amazon SQS Temporary Queue Client. This client makes it easy to create and delete many temporary messaging destinations without inflating your AWS bill.” https://aws.amazon.com/blogs/compute/simple-two-way-messaging-using-the-amazon-sqs-temporary-queue-client/

8
Q

A company is launching an ecommerce website on AWS. This website is built with a three-tier architecture that includes a MySQL database in a Multi-AZ deployment of Amazon Aurora MySQL. The website application must be highly available and will initially be launched in an AWS Region with three Availability Zones. The application produces a metric that describes the load the application experiences.

Which solution meets these requirements?

  1. Configure an Application Load Balancer (ALB) with Amazon EC2 Auto Scaling behind the ALB with scheduled scaling
  2. Configure an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a simple scaling policy.
  3. Configure a Network Load Balancer (NLB) and launch a Spot Fleet with Amazon EC2 Auto Scaling behind the NLB.
  4. Configure an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a target tracking scaling policy.
A
  4. Configure an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a target tracking scaling policy.

application produces a load metric = target tracking scaling policy keyed to that metric = 4
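As a sketch, answer 4 maps to a put_scaling_policy call on the Auto Scaling group with a customized metric (metric name, namespace, and target value are illustrative):

```python
# Parameters for autoscaling_client.put_scaling_policy(**scaling_policy):
# keep the application's own CloudWatch load metric near the target.
scaling_policy = {
    "AutoScalingGroupName": "web-asg",
    "PolicyName": "track-app-load",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "CustomizedMetricSpecification": {
            "MetricName": "ApplicationLoad",   # the app-produced metric
            "Namespace": "EcommerceApp",
            "Statistic": "Average",
        },
        "TargetValue": 70.0,  # scale in/out to hold the metric near 70
    },
}
```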

9
Q

A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire application stack, and access to the information should be restricted to certain applications.

Which action should the solutions architect take?

  1. Configure a CloudFront signed URL
  2. Configure a CloudFront signed cookie.
  3. Configure a CloudFront field-level encryption profile.
  4. Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy.
A
  3. Configure a CloudFront field-level encryption profile.

protected throughout the entire application stack + access restricted to certain applications = field-level encryption (sensitive fields stay encrypted until the application holding the private key decrypts them)

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html

10
Q

A solutions architect is redesigning a monolithic application to be a loosely coupled application composed of two microservices: Microservice A and Microservice B.

Microservice A places messages in a main Amazon Simple Queue Service (Amazon SQS) queue for Microservice B to consume. When Microservice B fails to process a message after four retries, the message needs to be removed from the queue and stored for further investigation.

What should the solutions architect do to meet these requirements?

  1. Create an SQS dead-letter queue. Microservice B adds failed messages to that queue after it receives and fails to process the message four times.
  2. Create an SQS dead-letter queue. Configure the main SQS queue to deliver messages to the dead-letter queue after the message has been received four times.
  3. Create an SQS queue for failed messages. Microservice A adds failed messages to that queue after Microservice B receives and fails to process the message four times.
  4. Create an SQS queue for failed messages. Configure the SQS queue for failed messages to pull messages from the main SQS queue after the original message has been received four times.
A
  2. Create an SQS dead-letter queue. Configure the main SQS queue to deliver messages to the dead-letter queue after the message has been received four times.

removed from the queue and stored = dead-letter = 1,2

fails after four retries = set maxReceiveCount = 4 in the main queue's redrive policy = SQS moves the message to the dead-letter queue automatically, no code in Microservice A or B = not 1, 3

https://aws.amazon.com/blogs/compute/using-amazon-sqs-dead-letter-queues-to-replay-messages/
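As a sketch, the redrive policy is a JSON attribute on the main queue (ARNs are illustrative):

```python
import json

# Attributes for sqs_client.create_queue / set_queue_attributes on the
# main queue: after 4 failed receives, SQS moves the message to the DLQ.
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:main-queue-dlq"
main_queue_attributes = {
    "RedrivePolicy": json.dumps({
        "deadLetterTargetArn": dlq_arn,
        "maxReceiveCount": 4,
    })
}

redrive = json.loads(main_queue_attributes["RedrivePolicy"])
```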

11
Q

A company has NFS servers in an on-premises data center that need to periodically back up small amounts of data to Amazon S3. Which solution meets these requirements and is MOST cost-effective?

  1. Set up AWS Glue to copy the data from the on-premises servers to Amazon S3.
  2. Set up an AWS DataSync agent on the on-premises servers, and sync the data to Amazon S3.
  3. Set up an SFTP sync using AWS Transfer for SFTP to sync data from on-premises to Amazon S3.
  4. Set up an AWS Direct Connect connection between the on-premises data center and a VPC, and copy the data to Amazon S3.
A

2. Set up an AWS DataSync agent on the on-premises servers, and sync the data to Amazon S3.

on-premises NFS = DataSync or Direct Connect = 2, 4

DataSync = cheaper (low per-GB fee, no dedicated connection to pay for) = 2

https://aws.amazon.com/blogs/storage/best-practices-for-setting-up-your-aws-datasync-agent/

https://aws.amazon.com/blogs/aws/new-aws-transfer-for-sftp-fully-managed-sftp-service-for-amazon-s3/

AWS DataSync fee for data copied $0.0125 per gigabyte (GB)

SFTP data uploads $0.04 per gigabyte (GB) transferred

SFTP data downloads $0.04 per gigabyte (GB) transferred

https://aws.amazon.com/datasync/pricing/

https://aws.amazon.com/aws-transfer-family/pricing/
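Rough arithmetic at the listed per-GB rates, for an illustrative 50 GB of backups per month (data fees only; Transfer Family also bills per hour the SFTP endpoint is enabled, which widens the gap further):

```python
gb_per_month = 50
datasync_cost = gb_per_month * 0.0125  # $0.0125 per GB copied
sftp_cost = gb_per_month * 0.04        # $0.04 per GB uploaded

print(f"DataSync: ${datasync_cost:.2f}/month, SFTP: ${sftp_cost:.2f}/month")
```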

12
Q

A company runs its production workload on an Amazon Aurora MySQL DB cluster that includes six Aurora Replicas. The company wants near-real-time reporting queries from one of its departments to be automatically distributed across three of the Aurora Replicas. Those three replicas have a different compute and memory specification from the rest of the DB cluster.

Which solution meets these requirements?

  1. Create and use a custom endpoint for the workload.
  2. Create a three-node cluster clone and use the reader endpoint.
  3. Use any of the instance endpoints for the selected three nodes.
  4. Use the reader endpoint to automatically distribute the read-only workload.
A
  1. Create and use a custom endpoint for the workload.

DB cluster + automatically distributed + Replicas = custom endpoint

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.Endpoints.html#Aurora.Endpoints.Custom
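As a sketch, answer 1 maps to a create_db_cluster_endpoint call whose static members are exactly the three reporting replicas (identifiers are illustrative):

```python
# Parameters for rds_client.create_db_cluster_endpoint(**custom_endpoint):
# a READER custom endpoint that load-balances only across the three
# reporting replicas, leaving the other replicas untouched.
custom_endpoint = {
    "DBClusterIdentifier": "prod-aurora-cluster",
    "DBClusterEndpointIdentifier": "reporting",
    "EndpointType": "READER",
    "StaticMembers": ["replica-4", "replica-5", "replica-6"],
}
```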

13
Q

A company has multiple applications that use Amazon RDS for MySQL as its database. The company recently discovered that a new custom reporting application has increased the number of queries on the database. This is slowing down performance.

How should a solutions architect resolve this issue with the LEAST amount of application changes?

  1. Add a secondary DB instance using Multi-AZ.
  2. Set up a read replica and Multi-AZ on Amazon RDS.
  3. Set up a standby replica and Multi-AZ on Amazon RDS.
  4. Use caching on Amazon RDS to improve the overall performance.
A
  2. Set up a read replica and Multi-AZ on Amazon RDS.

increased number of queries from reporting = offload reads to a read replica

LEAST amount of application changes = read replica

14
Q

A company wants to automate the security assessment of its Amazon EC2 instances. The company needs to validate and demonstrate that security and compliance standards are being followed throughout the development process.

What should a solutions architect do to meet these requirements?

  1. Use Amazon Macie to automatically discover, classify and protect the EC2 instances.
  2. Use Amazon GuardDuty to publish Amazon Simple Notification Service (Amazon SNS) notifications.
  3. Use Amazon Inspector with Amazon CloudWatch to publish Amazon Simple Notification Service (Amazon SNS) notifications
  4. Use Amazon EventBridge (Amazon CloudWatch Events) to detect and react to changes in the status of AWS Trusted Advisor checks.
A
  3. Use Amazon Inspector with Amazon CloudWatch to publish Amazon Simple Notification Service (Amazon SNS) notifications

automated security assessment of EC2 instances = Amazon Inspector

validate and demonstrate compliance = Inspector findings published as notifications via CloudWatch + SNS

15
Q

A company stores 200 GB of data each month in Amazon S3. The company needs to perform analytics on this data at the end of each month to determine the number of items sold in each sales region for the previous month.

Which analytics strategy is MOST cost-effective for the company to use?

  1. Create an Amazon Elasticsearch Service (Amazon ES) cluster. Query the data in Amazon ES. Visualize the data by using Kibana.
  2. Create a table in the AWS Glue Data Catalog. Query the data in Amazon S3 by using Amazon Athena. Visualize the data in Amazon QuickSight.
  3. Create an Amazon EMR cluster. Query the data by using Amazon EMR, and store the results in Amazon S3. Visualize the data in Amazon QuickSight.
  4. Create an Amazon Redshift cluster. Query the data in Amazon Redshift, and upload the results to Amazon S3. Visualize the data in Amazon QuickSight.
A
  2. Create a table in the AWS Glue Data Catalog. Query the data in Amazon S3 by using Amazon Athena. Visualize the data in Amazon QuickSight.

Visualize the data in Amazon QuickSight = 2,3,4

4 = Redshift cluster = expensive for a once-a-month query

3 = EMR cluster = infrastructure to run and pay for

2 = Glue Data Catalog + Athena querying S3 in place = serverless, pay per query = cheap

2 wins

16
Q

A company wants to move its on-premises network-attached storage (NAS) to AWS. The company wants to make the data available to any Linux instances within its VPC and ensure changes are automatically synchronized across all instances accessing the data store. The majority of the data is accessed very rarely, and some files are accessed by multiple users at the same time.

Which solution meets these requirements and is MOST cost-effective?

  1. Create an Amazon Elastic Block Store (Amazon EBS) snapshot containing the data. Share it with users within the VPC.
  2. Create an Amazon S3 bucket that has a lifecycle policy set to transition the data to S3 Standard-Infrequent Access (S3 Standard-IA) after the appropriate number of days.
  3. Create an Amazon Elastic File System (Amazon EFS) file system within the VPC. Set the throughput mode to Provisioned and to the required amount of IOPS to support concurrent usage.
  4. Create an Amazon Elastic File System (Amazon EFS) file system within the VPC. Set the lifecycle policy to transition the data to EFS Infrequent Access (EFS IA) after the appropriate number of days.
A
  4. Create an Amazon Elastic File System (Amazon EFS) file system within the VPC. Set the lifecycle policy to transition the data to EFS Infrequent Access (EFS IA) after the appropriate number of days.

data available to any Linux instances = concurrency = EFS = 3,4

majority of the data is accessed very rarely = IA = 4 wins
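As a sketch, the EFS lifecycle policy for answer 4 is one call; the 90-day window and file system ID are illustrative:

```python
# Parameters for efs_client.put_lifecycle_configuration(**efs_lifecycle):
# files not accessed for 90 days transition to EFS Infrequent Access
# automatically; instances still see one shared POSIX file system.
efs_lifecycle = {
    "FileSystemId": "fs-12345678",
    "LifecyclePolicies": [
        {"TransitionToIA": "AFTER_90_DAYS"},
    ],
}
```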

17
Q

A company plans to host a survey website on AWS. The company anticipates an unpredictable amount of traffic. This traffic results in asynchronous updates to the database. The company wants to ensure that writes to the database hosted on AWS do not get dropped.

How should the company write its application to handle these database requests?

  1. Configure the application to publish to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the database to the SNS topic.
  2. Configure the application to subscribe to an Amazon Simple Notification Service (Amazon SNS) topic. Publish the database updates to the SNS topic.
  3. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues to queue the database connection until the database has resources to write the data.
  4. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues for capturing the writes and draining the queue as each write is made to the database.
A
  4. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues for capturing the writes and draining the queue as each write is made to the database.

writes to the database hosted on AWS do not get dropped = SQS = 3,4

3 = "queue the database connection" = SQS queues messages, not database connections = not 3

4 wins
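As a sketch, each write becomes a message on a FIFO queue (queue URL and IDs are illustrative):

```python
# Parameters for sqs_client.send_message(**send_params): MessageGroupId
# preserves ordering within a group; MessageDeduplicationId prevents a
# retried publish from writing the same update twice.
send_params = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/db-writes.fifo",
    "MessageBody": '{"survey_id": 42, "answer": "yes"}',
    "MessageGroupId": "survey-42",
    "MessageDeduplicationId": "write-0001",
}
```

A consumer drains the queue at whatever rate the database can sustain, so spikes never drop writes.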

18
Q

A company that recently started using AWS establishes a Site-to-Site VPN between its on-premises datacenter and AWS. The company’s security mandate states that traffic originating from on premises should stay within the company’s private IP space when communicating with an Amazon Elastic Container Service (Amazon ECS) cluster that is hosting a sample web application.

Which solution meets this requirement?

  1. Configure a gateway endpoint for Amazon ECS. Modify the route table to include an entry pointing to the ECS cluster.
  2. Create a Network Load Balancer and AWS PrivateLink endpoint for Amazon ECS in the same VPC that is hosting the ECS cluster.
  3. Create a Network Load Balancer in one VPC and an AWS PrivateLink endpoint for Amazon ECS in another VPC. Connect the two VPCs by using VPC peering.
  4. Configure an Amazon Route 53 record with Amazon ECS as the target. Apply a server certificate to Route 53 from AWS Certificate Manager (ACM) for SSL offloading.
A

2. Create a Network Load Balancer and AWS PrivateLink endpoint for Amazon ECS in the same VPC that is hosting the ECS cluster.

stay within the company’s private IP space = PrivateLink endpoint in the same VPC

https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a316bf46-24db-4514-957d-abc60f8f6962/images/573951ed-74bb-4023-9d9c-43e77e4f8eda.png

19
Q

A solutions architect must analyze and update a company’s existing IAM policies prior to deploying a new workload. The solutions architect created the following policy:

[Image: IAM policy document (SAA-C02 Part 18 Q19)]

What is the net effect of this policy?

  1. Users will be allowed all actions except s3:PutObject if multi-factor authentication (MFA) is enabled.
  2. Users will be allowed all actions except s3:PutObject if multi-factor authentication (MFA) is not enabled.
  3. Users will be denied all actions except s3:PutObject if multi-factor authentication (MFA) is enabled.
  4. Users will be denied all actions except s3:PutObject if multi-factor authentication (MFA) is not enabled.
A

4. Users will be denied all actions except s3:PutObject if multi-factor authentication (MFA) is not enabled.

Match the pieces one at a time: Effect Deny + NotAction s3:PutObject = everything except s3:PutObject is denied; Condition on aws:MultiFactorAuthPresent = false = the deny applies only when MFA is not enabled = 4
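The policy image is missing here; a policy consistent with answer 4 would look roughly like this (a reconstruction, not the exam's exact document):

```python
import json

# Deny + NotAction inverts the action list; the Condition limits the
# deny to requests made without MFA.
mfa_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "NotAction": "s3:PutObject",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

print(json.dumps(mfa_policy, indent=2))
```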

20
Q

A company is running a multi-tier web application on premises. The web application is containerized and runs on a number of Linux hosts connected to a PostgreSQL database that contains user records. The operational overhead of maintaining the infrastructure and capacity planning is limiting the company’s growth. A solutions architect must improve the application’s infrastructure.

Which combination of actions should the solutions architect take to accomplish this? (Choose two.)

  1. Migrate the PostgreSQL database to Amazon Aurora.
  2. Migrate the web application to be hosted on Amazon EC2 instances.
  3. Set up an Amazon CloudFront distribution for the web application content.
  4. Set up Amazon ElastiCache between the web application and the PostgreSQL database.
  5. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).
A
  1. Migrate the PostgreSQL database to Amazon Aurora.
  5. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).

hosted on Amazon EC2 instances = still managing hosts and capacity = not 2

CloudFront = caches content, does not reduce infrastructure maintenance = not 3

Amazon ElastiCache = adds another component to manage, does not remove the hosts or database overhead = not 4

Aurora = managed database, Fargate = containers with no hosts or capacity planning = 1 and 5 win