saa-c02-part-18 Flashcards
A company needs to store data in Amazon S3. A compliance requirement states that when any changes are made to objects, the previous state of the object must be preserved. Additionally, files older than 5 years should not be accessed but need to be archived for auditing.
What should a solutions architect recommend that is MOST cost-effective?
- Enable object-level versioning and S3 Object Lock in governance mode
- Enable object-level versioning and S3 Object Lock in compliance mode
- Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Glacier Deep Archive
- Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Standard-Infrequent Access (S3 Standard-IA)
- Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Glacier Deep Archive
files older than 5 years should not be accessed but need to be archived = lifecycle + glacier
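The winning answer combines two bucket-level settings. A minimal sketch of both as boto3-style request bodies (the bucket name and the actual API calls, shown in comments, are illustrative, not from the card):

```python
# Versioning preserves every previous state of an object on change/delete.
versioning = {"Status": "Enabled"}

# Lifecycle rule: transition objects older than 5 years to Glacier Deep Archive.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-after-5-years",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects in the bucket
            "Transitions": [
                {"Days": 5 * 365, "StorageClass": "DEEP_ARCHIVE"}
            ],
        }
    ]
}

# With boto3 these would be applied roughly as:
# s3 = boto3.client("s3")
# s3.put_bucket_versioning(Bucket="example-bucket", VersioningConfiguration=versioning)
# s3.put_bucket_lifecycle_configuration(Bucket="example-bucket", LifecycleConfiguration=lifecycle)
```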
A new employee has joined a company as a deployment engineer. The deployment engineer will be using AWS CloudFormation templates to create multiple AWS resources. A solutions architect wants the deployment engineer to perform job activities while following the principle of least privilege.
Which combination of actions should the solutions architect take to accomplish this goal? (Choose two.)
- Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
- Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
- Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
- Create a new IAM User for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
- Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.
- Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
- Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.
- AWS account root user credentials = not least privilege
- IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only = looks good
- permissions specific to the AWS CloudFormation = also least privilege
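A minimal sketch of the least-privilege identity policy the winning options describe, scoped to CloudFormation actions only (the JSON shape is standard; the choice to allow all `cloudformation:*` actions is illustrative):

```python
# Least-privilege policy for the deployment engineer's group:
# CloudFormation actions only, nothing else.
cfn_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "cloudformation:*",  # only CloudFormation APIs
            "Resource": "*",
        }
    ],
}
# Compare: the AdministratorAccess managed policy allows "Action": "*",
# which is far broader than the job requires.
```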
A company is planning to use an Amazon DynamoDB table for data storage. The company is concerned about cost optimization. The table will not be used most mornings. In the evenings, the read and write traffic will often be unpredictable. When traffic spikes occur, they will happen very quickly.
What should a solutions architect recommend?
- Create a DynamoDB table in on-demand capacity mode.
- Create a DynamoDB table with a global secondary index.
- Create a DynamoDB table with provisioned capacity and auto scaling.
- Create a DynamoDB table in provisioned capacity mode, and configure it as a global table.
- Create a DynamoDB table in on-demand capacity mode.
traffic spikes occur they will happen very quickly = on-demand capacity = 1
traffic will often be unpredictable = on-demand capacity = not provisioned = not 3,4
https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/
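On-demand mode is selected at table creation with `BillingMode="PAY_PER_REQUEST"`; no capacity planning or `ProvisionedThroughput` block is needed. A sketch (table and attribute names are made up):

```python
# On-demand DynamoDB table: pay per request, scales instantly with spikes.
create_table_params = {
    "TableName": "ExampleTable",
    "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand capacity mode
}
# boto3: boto3.client("dynamodb").create_table(**create_table_params)
```

With provisioned mode plus auto scaling (options 3 and 4), scaling reacts to consumed-capacity alarms and lags very fast spikes; on-demand absorbs them immediately.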
A meteorological startup company has a custom web application to sell weather data to its users online. The company uses Amazon DynamoDB to store its data and wants to build a new service that sends an alert to the managers of four internal teams every time a new weather event is recorded. The company does not want this new service to affect the performance of the current application.
What should a solutions architect do to meet these requirements with the LEAST amount of operational overhead?
- Use DynamoDB transactions to write new event data to the table. Configure the transactions to notify internal teams.
- Have the current application publish a message to four Amazon Simple Notification Service (Amazon SNS) topics. Have each team subscribe to one topic.
- Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.
- Add a custom attribute to each record to flag new items. Write a cron job that scans the table every minute for items that are new and notifies an Amazon Simple Queue Service (Amazon SQS) queue to which the teams can subscribe.
- Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.
service that sends an alert = SNS = 2,3
2 = four Amazon Simple Notification Service (Amazon SNS) topics = not least overhead = not fanout
3 wins
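A sketch of the trigger in the winning option: a Lambda function wired to the table's DynamoDB stream that publishes each newly inserted item to a single SNS topic. The `publish` callback stands in for a real `sns.publish(TopicArn=..., Message=...)` call; the event shape follows the DynamoDB Streams record format:

```python
def handler(event, publish):
    """Process DynamoDB stream records; publish() abstracts sns.publish()."""
    alerts = []
    for record in event.get("Records", []):
        if record.get("eventName") == "INSERT":  # only brand-new weather events
            msg = f"New weather event: {record['dynamodb']['Keys']}"
            publish(msg)  # one topic; all four teams subscribe to it
            alerts.append(msg)
    return alerts
```

Because the stream is read asynchronously, the current application's write path is untouched, which satisfies the "does not affect performance" requirement.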
A company is preparing to deploy a new serverless workload. A solutions architect needs to configure permissions for invoking an AWS Lambda function. The function will be triggered by an Amazon EventBridge (Amazon CloudWatch Events) rule. Permissions should be configured using the principle of least privilege.
Which solution will meet these requirements?
- Add an execution role to the function with lambda:InvokeFunction as the action and * as the principal.
- Add an execution role to the function with lambda:InvokeFunction as the action and Service:amazonaws.com as the principal.
- Add a resource-based policy to the function with lambda:* as the action and Service:events.amazonaws.com as the principal.
- Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal.
- Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal.
* as the principal = not least privilege = not 1
Service:amazonaws.com as the principal = not least privilege = not 2
lambda:* = all lambda permissions = not least privilege = not 3
4 wins
https://docs.aws.amazon.com/eventbridge/latest/userguide/resource-based-policies-eventbridge.html#lambda-permissions
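A sketch of the resource-based policy statement the winning option describes: one action, the EventBridge service principal, and (optionally, for even tighter scoping) a source-ARN condition naming the specific rule. All ARNs are made up:

```python
# Resource-based policy statement on the Lambda function.
statement = {
    "Effect": "Allow",
    "Principal": {"Service": "events.amazonaws.com"},  # EventBridge only
    "Action": "lambda:InvokeFunction",                 # invoke only, not lambda:*
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:example-fn",
    "Condition": {
        "ArnLike": {
            "AWS:SourceArn": "arn:aws:events:us-east-1:123456789012:rule/example-rule"
        }
    },
}
# CLI equivalent (roughly): aws lambda add-permission
#   --action lambda:InvokeFunction --principal events.amazonaws.com
#   --source-arn <rule ARN> ...
```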
A company is building its web application using containers on AWS. The company requires three instances of the web application to run at all times. The application must be able to scale to meet increases in demand. Management is extremely sensitive to cost but agrees that the application should be highly available.
What should a solutions architect recommend?
- Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Fargate launch type. Create a task definition for the web application. Create an ECS service with a desired count of three tasks.
- Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Amazon EC2 launch type with three container instances in one Availability Zone. Create a task definition for the web application. Place one task for each container instance.
- Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Fargate launch type with one container instance in three different Availability Zones. Create a task definition for the web application. Create an ECS service with a desired count of three tasks.
- Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Amazon EC2 launch type with one container instance in two different Availability Zones. Create a task definition for the web application. Place two tasks on one container instance and one task on the remaining container instance.
- Create an Amazon Elastic Container Service (Amazon ECS) cluster using the Fargate launch type. Create a task definition for the web application. Create an ECS service with a desired count of three tasks.
extremely sensitive to cost = try to stay serverless = 1,3
highly available = multi AZ = 1,3
Fargate spreads tasks across Availability Zones for you = 3's "one container instance in three different Availability Zones" is contradictory; Fargate has no container instances to manage, and it takes care of HA for you.
https://aws.amazon.com/blogs/containers/amazon-ecs-availability-best-practices/
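The winning option as boto3-style `create_service` parameters. With `launchType="FARGATE"` and subnets in multiple AZs, ECS places the three tasks for you; cluster, service, task definition, and subnet names are made up:

```python
# Fargate service: three tasks, no EC2 container instances to manage.
create_service_params = {
    "cluster": "web-cluster",
    "serviceName": "web-app",
    "taskDefinition": "web-app:1",
    "launchType": "FARGATE",
    "desiredCount": 3,  # "three instances of the web application at all times"
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-a", "subnet-b", "subnet-c"],  # one per AZ
            "assignPublicIp": "DISABLED",
        }
    },
}
# boto3: boto3.client("ecs").create_service(**create_service_params)
```

Pairing the service with Service Auto Scaling then covers the "scale to meet increases in demand" requirement.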
A company is re-architecting a strongly coupled application to be loosely coupled. Previously the application used a request/response pattern to communicate between tiers. The company plans to use Amazon Simple Queue Service (Amazon SQS) to achieve decoupling requirements. The initial design contains one queue for requests and one for responses. However, this approach is not processing all the messages as the application scales.
What should a solutions architect do to resolve this issue?
- Configure a dead-letter queue on the ReceiveMessage API action of the SQS queue.
- Configure a FIFO queue, and use the message deduplication ID and message group ID.
- Create a temporary queue, with the Temporary Queue Client to receive each response message.
- Create a queue for each request and response on startup for each producer, and use a correlation ID message attribute.
- Create a temporary queue, with the Temporary Queue Client to receive each response message.
request/response pattern = Temporary Queue Client
"To better support short-lived, lightweight messaging destinations, we are pleased to present the Amazon SQS Temporary Queue Client. This client makes it easy to create and delete many temporary messaging destinations without inflating your AWS bill." https://aws.amazon.com/blogs/compute/simple-two-way-messaging-using-the-amazon-sqs-temporary-queue-client/
A company is launching an ecommerce website on AWS. This website is built with a three-tier architecture that includes a MySQL database in a Multi-AZ deployment of Amazon Aurora MySQL. The website application must be highly available and will initially be launched in an AWS Region with three Availability Zones. The application produces a metric that describes the load the application experiences.
Which solution meets these requirements?
- Configure an Application Load Balancer (ALB) with Amazon EC2 Auto Scaling behind the ALB with scheduled scaling
- Configure an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a simple scaling policy.
- Configure a Network Load Balancer (NLB) and launch a Spot Fleet with Amazon EC2 Auto Scaling behind the NLB.
- Configure an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a target tracking scaling policy.
- Configure an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a target tracking scaling policy.
application produces a metric that describes the load = target tracking scaling policy
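A sketch of a target tracking configuration that tracks a custom metric, i.e. the "load" metric the application emits. The metric name, namespace, and target value are made up for illustration:

```python
# Target tracking: Auto Scaling adds/removes instances to keep the
# application's custom load metric near the target value.
target_tracking_config = {
    "TargetValue": 50.0,  # keep the custom load metric around 50
    "CustomizedMetricSpecification": {
        "MetricName": "ApplicationLoad",
        "Namespace": "ExampleApp",
        "Statistic": "Average",
    },
}
# boto3 (EC2 Auto Scaling): autoscaling.put_scaling_policy(
#   PolicyType="TargetTrackingScaling",
#   TargetTrackingConfiguration=target_tracking_config, ...)
```

A simple scaling policy (option 2) reacts step-by-step to one alarm; target tracking continuously adjusts capacity toward the metric target, which fits "produces a metric that describes the load".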
A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire application stack, and access to the information should be restricted to certain applications.
Which action should the solutions architect take?
- Configure a CloudFront signed URL
- Configure a CloudFront signed cookie.
- Configure a CloudFront field-level encryption profile.
- Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy.
- Configure a CloudFront field-level encryption profile.
protected throughout the entire application stack = field-level
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html
A solutions architect is redesigning a monolithic application to be a loosely coupled application composed of two microservices: Microservice A and Microservice B.
Microservice A places messages in a main Amazon Simple Queue Service (Amazon SQS) queue for Microservice B to consume. When Microservice B fails to process a message after four retries, the message needs to be removed from the queue and stored for further investigation.
What should the solutions architect do to meet these requirements?
- Create an SQS dead-letter queue. Microservice B adds failed messages to that queue after it receives and fails to process the message four times.
- Create an SQS dead-letter queue. Configure the main SQS queue to deliver messages to the dead-letter queue after the message has been received four times.
- Create an SQS queue for failed messages. Microservice A adds failed messages to that queue after Microservice B receives and fails to process the message four times.
- Create an SQS queue for failed messages. Configure the SQS queue for failed messages to pull messages from the main SQS queue after the original message has been received four times.
- Create an SQS dead-letter queue. Configure the main SQS queue to deliver messages to the dead-letter queue after the message has been received four times.
removed from the queue and stored = dead-letter = 1,2
When Microservice B fails to process a message = the message is not deleted and returns to the queue; after the fourth receive, SQS moves it to the dead-letter queue automatically (maxReceiveCount)
https://aws.amazon.com/blogs/compute/using-amazon-sqs-dead-letter-queues-to-replay-messages/
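A sketch of wiring the main queue to a dead-letter queue with `maxReceiveCount` set to 4, matching "after the message has been received four times". The DLQ ARN is made up; the `RedrivePolicy` attribute really is a JSON-encoded string:

```python
import json

# Redrive policy attached to the MAIN queue: after the 4th receive without
# a successful delete, SQS moves the message to the dead-letter queue.
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:failed-messages-dlq",
    "maxReceiveCount": 4,
}
main_queue_attributes = {"RedrivePolicy": json.dumps(redrive_policy)}
# boto3: sqs.set_queue_attributes(QueueUrl=main_queue_url,
#                                 Attributes=main_queue_attributes)
```

This is why option 2 beats option 1: the queue does the moving; Microservice B does not need any code to forward failed messages.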
A company has NFS servers in an on-premises data center that need to periodically back up small amounts of data to Amazon S3. Which solution meets these requirements and is MOST cost-effective?
- Set up AWS Glue to copy the data from the on-premises servers to Amazon S3.
- Set up an AWS DataSync agent on the on-premises servers, and sync the data to Amazon S3.
- Set up an SFTP sync using AWS Transfer for SFTP to sync data from on-premises to Amazon S3.
- Set up an AWS Direct Connect connection between the on-premises data center and a VPC, and copy the data to Amazon S3.
on-premises = DataSync or Direct Connect = 2,4
DataSync = cheaper = 2
https://aws.amazon.com/blogs/storage/best-practices-for-setting-up-your-aws-datasync-agent/
https://aws.amazon.com/blogs/aws/new-aws-transfer-for-sftp-fully-managed-sftp-service-for-amazon-s3/
AWS DataSync fee for data copied $0.0125 per gigabyte (GB)
SFTP data uploads $0.04 per gigabyte (GB) transferred
SFTP data downloads $0.04 per gigabyte (GB) transferred
https://aws.amazon.com/datasync/pricing/
https://aws.amazon.com/aws-transfer-family/pricing/
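A quick check of the per-GB prices quoted above, using an illustrative 50 GB monthly backup (the question only says "small amounts"):

```python
# Per-GB comparison from the quoted prices (data-transfer portion only).
gb_per_month = 50
datasync_cost = gb_per_month * 0.0125   # $0.0125/GB copied by DataSync
sftp_upload_cost = gb_per_month * 0.04  # $0.04/GB uploaded via Transfer for SFTP
# DataSync: $0.625 vs SFTP: $2.00 for the same 50 GB.
# (Transfer for SFTP also bills an hourly endpoint fee on top of this.)
```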
A company runs its production workload on an Amazon Aurora MySQL DB cluster that includes six Aurora Replicas. The company wants near-real-time reporting queries from one of its departments to be automatically distributed across three of the Aurora Replicas. Those three replicas have a different compute and memory specification from the rest of the DB cluster.
Which solution meets these requirements?
- Create and use a custom endpoint for the workload.
- Create a three-node cluster clone and use the reader endpoint.
- Use any of the instance endpoints for the selected three nodes.
- Use the reader endpoint to automatically distribute the read-only workload.
- Create and use a custom endpoint for the workload.
DB cluster + automatically distributed + Replicas = custom endpoint
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.Endpoints.html#Aurora.Endpoints.Custom
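A sketch of the winning answer as boto3-style parameters for `create_db_cluster_endpoint`, pinning the reporting workload to three specific replicas. Cluster and instance identifiers are made up:

```python
# Custom READER endpoint limited to the three larger replicas; connections
# to this endpoint are load-balanced across only these members.
custom_endpoint_params = {
    "DBClusterIdentifier": "prod-aurora-cluster",
    "DBClusterEndpointIdentifier": "reporting",
    "EndpointType": "READER",
    "StaticMembers": [
        "replica-reporting-1",
        "replica-reporting-2",
        "replica-reporting-3",
    ],
}
# boto3: boto3.client("rds").create_db_cluster_endpoint(**custom_endpoint_params)
```

The built-in reader endpoint (option 4) spreads reads across all six replicas, so it cannot restrict the reporting queries to the three differently sized instances.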
A company has multiple applications that use Amazon RDS for MySQL as its database. The company recently discovered that a new custom reporting application has increased the number of queries on the database. This is slowing down performance.
How should a solutions architect resolve this issue with the LEAST amount of application changes?
- Add a secondary DB instance using Multi-AZ.
- Set up a read replica and Multi-AZ on Amazon RDS.
- Set up a standby replica and Multi-AZ on Amazon RDS.
- Use caching on Amazon RDS to improve the overall performance.
- Set up a read replica and Multi-AZ on Amazon RDS.
number of Queries = read replica
LEAST amount of application changes = read replica
A company wants to automate the security assessment of its Amazon EC2 instances. The company needs to validate and demonstrate that security and compliance standards are being followed throughout the development process.
What should a solutions architect do to meet these requirements?
- Use Amazon Macie to automatically discover, classify and protect the EC2 instances.
- Use Amazon GuardDuty to publish Amazon Simple Notification Service (Amazon SNS) notifications.
- Use Amazon Inspector with Amazon CloudWatch to publish Amazon Simple Notification Service (Amazon SNS) notifications
- Use Amazon EventBridge (Amazon CloudWatch Events) to detect and react to changes in the status of AWS Trusted Advisor checks.
- Use Amazon Inspector with Amazon CloudWatch to publish Amazon Simple Notification Service (Amazon SNS) notifications
security assessment = Inspector
compliance = Inspector
A company stores 200 GB of data each month in Amazon S3. The company needs to perform analytics on this data at the end of each month to determine the number of items sold in each sales region for the previous month.
Which analytics strategy is MOST cost-effective for the company to use?
- Create an Amazon Elasticsearch Service (Amazon ES) cluster. Query the data in Amazon ES. Visualize the data by using Kibana.
- Create a table in the AWS Glue Data Catalog. Query the data in Amazon S3 by using Amazon Athena. Visualize the data in Amazon QuickSight.
- Create an Amazon EMR cluster. Query the data by using Amazon EMR, and store the results in Amazon S3. Visualize the data in Amazon QuickSight.
- Create an Amazon Redshift cluster. Query the data in Amazon Redshift, and upload the results to Amazon S3. Visualize the data in Amazon QuickSight.
- Create a table in the AWS Glue Data Catalog. Query the data in Amazon S3 by using Amazon Athena. Visualize the data in Amazon QuickSight.
Visualize the data in Amazon QuickSight = 2,3,4
4 = redshift expensive
3 = EMR = requires running a cluster = not cost-effective for a once-a-month query
2 = Glue Data Catalog = serverless metadata store = cheap
2= Athena query S3 = serverless = cheap
2 wins
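A rough cost check for option 2, assuming Athena's published rate of about $5 per TB scanned and a worst case where the monthly query scans all 200 GB:

```python
# Athena bills per data scanned; nothing runs between queries.
tb_scanned = 200 / 1024               # 200 GB expressed in TB
athena_query_cost = tb_scanned * 5.0  # under $1 for one full monthly scan
# Unlike EMR or Redshift, there is no cluster billing per instance-hour
# while idle the rest of the month.
```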