saa-c02-part-13 Flashcards
A company is experiencing growth as demand for its product has increased. The company’s existing purchasing application is slow when traffic spikes. The application is a monolithic three-tier application that uses synchronous transactions and sometimes sees bottlenecks in the application tier. A solutions architect needs to design a solution that can meet required application response times while accounting for traffic volume spikes.
Which solution will meet these requirements?
- Vertically scale the application instance using a larger Amazon EC2 instance size.
- Scale the application’s persistence layer horizontally by introducing Oracle RAC on AWS.
- Scale the web and application tiers horizontally using Auto Scaling groups and an Application Load Balancer.
- Decouple the application and data tiers using Amazon Simple Queue Service (Amazon SQS) with asynchronous AWS Lambda calls.
- Decouple the application and data tiers using Amazon Simple Queue Service (Amazon SQS) with asynchronous AWS Lambda calls.
synchronous transactions + bottlenecks in the application tier = decoupling needed
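A minimal sketch of the decoupling pattern, assuming a hypothetical queue URL: the application tier enqueues each transaction to SQS instead of calling the data tier synchronously, and a Lambda function (via an SQS event source mapping) drains the queue at its own pace.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/purchase-queue"  # hypothetical

def submit_order(order: dict) -> None:
    """Application tier: enqueue the transaction instead of writing synchronously."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))

def handler(event, context):
    """Lambda consumer (SQS event source mapping): process each queued transaction."""
    for record in event["Records"]:
        order = json.loads(record["body"])
        # ... persist the order to the data tier here ...
        print("processed order", order.get("id"))
```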
A company hosts an application used to upload files to an Amazon S3 bucket. Once uploaded, the files are processed to extract metadata, which takes less than 5 seconds. The volume and frequency of the uploads varies from a few files each hour to hundreds of concurrent uploads. The company has asked a solutions architect to design a cost-effective architecture that will meet these requirements.
What should the solutions architect recommend?
- Configure AWS CloudTrail trails to log S3 API calls. Use AWS AppSync to process the files.
- Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.
- Configure Amazon Kinesis Data Streams to process and send data to Amazon S3. Invoke an AWS Lambda function to process the files.
- Configure an Amazon Simple Notification Service (Amazon SNS) topic to process the files uploaded to Amazon S3. Invoke an AWS Lambda function to process the files.
- Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.
volume and frequency of the uploads varies = event-driven Lambda = option 2 is the simplest and most cost-effective
https://aws.amazon.com/premiumsupport/knowledge-center/lambda-configure-s3-event-notification/
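A minimal sketch of wiring the object-created notification with boto3, assuming a hypothetical bucket name and function ARN; as the linked article describes, the Lambda function's resource policy must also allow s3.amazonaws.com to invoke it.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names; the Lambda resource policy must already allow
# s3.amazonaws.com to invoke the function (via lambda add-permission).
s3.put_bucket_notification_configuration(
    Bucket="upload-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:extract-metadata",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```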
A company has copied 1 PB of data from a colocation facility to an Amazon S3 bucket in the us-east-1 Region using an AWS Direct Connect link. The company now wants to copy the data to another S3 bucket in the us-west-2 Region. The colocation facility does not allow the use of AWS Snowball.
What should a solutions architect recommend to accomplish this?
- Order a Snowball Edge device to copy the data from one Region to another Region.
- Transfer contents from the source S3 bucket to a target S3 bucket using the S3 console.
- Use the aws S3 sync command to copy data from the source bucket to the destination bucket.
- Add a cross-Region replication configuration to copy objects across S3 buckets in different Regions.
- Add a cross-Region replication configuration to copy objects across S3 buckets in different Regions.
multiple regions = cross-Region replication
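A sketch of the replication configuration with boto3, assuming hypothetical bucket names and replication role. Versioning must be enabled on both buckets before replication can be configured.

```python
import boto3

s3 = boto3.client("s3")

# Versioning is a prerequisite on both source and destination buckets.
for bucket in ("source-bucket-us-east-1", "dest-bucket-us-west-2"):  # hypothetical
    s3.put_bucket_versioning(Bucket=bucket, VersioningConfiguration={"Status": "Enabled"})

s3.put_bucket_replication(
    Bucket="source-bucket-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # hypothetical
        "Rules": [
            {
                "ID": "replicate-all",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::dest-bucket-us-west-2"},
            }
        ],
    },
)
```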
A company is using a fleet of Amazon EC2 instances to ingest data from on-premises data sources. The data is in JSON format and ingestion rates can be as high as 1 MB/s. When an EC2 instance is rebooted, the data in-flight is lost. The company’s data science team wants to query ingested data in near-real time.
Which solution provides near-real-time data querying that is scalable with minimal data loss?
- Publish data to Amazon Kinesis Data Streams. Use Kinesis Data Analytics to query the data.
- Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination. Use Amazon Redshift to query the data.
- Store ingested data in an EC2 instance store. Publish data to Amazon Kinesis Data Firehose with Amazon S3 as the destination. Use Amazon Athena to query the data.
- Store ingested data in an Amazon Elastic Block Store (Amazon EBS) volume. Publish data to Amazon ElastiCache for Redis. Subscribe to the Redis channel to query the data.
- Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination. Use Amazon Redshift to query the data.
near-real time = Kinesis = 1,2,3
query ingested data = db needed = 2,3
3 = EC2 instance store = data is still lost on reboot, so 2 is correct
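The producer side is a one-liner per record; a sketch assuming a hypothetical delivery stream named ingest-to-redshift. Firehose buffers the records and loads batches into Redshift via COPY.

```python
import json
import boto3

firehose = boto3.client("firehose")

def ingest(record: dict) -> None:
    """Publish one JSON record; Firehose buffers and COPYs batches into Redshift."""
    firehose.put_record(
        DeliveryStreamName="ingest-to-redshift",  # hypothetical stream name
        Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
    )

ingest({"source": "sensor-1", "value": 42})
```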
A company is deploying a multi-instance application within AWS that requires minimal latency between the instances.
What should a solutions architect recommend?
- Use an Auto Scaling group with a cluster placement group.
- Use an Auto Scaling group with single Availability Zone in the same AWS Region.
- Use an Auto Scaling group with multiple Availability Zones in the same AWS Region.
- Use a Network Load Balancer with multiple Amazon EC2 Dedicated Hosts as the targets.
- Use an Auto Scaling group with a cluster placement group.
minimal latency between the instances = cluster placement group
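A sketch of combining the two, assuming hypothetical placement group, launch template, and ASG names: create the cluster placement group, then point the Auto Scaling group at it.

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Cluster strategy packs instances close together for low network latency.
ec2.create_placement_group(GroupName="low-latency-pg", Strategy="cluster")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="app-asg",      # hypothetical names throughout
    LaunchTemplate={"LaunchTemplateName": "app-lt", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    PlacementGroup="low-latency-pg",
    AvailabilityZones=["us-east-1a"],    # cluster placement groups live in one AZ
)
```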
A company is developing a mobile game that streams score updates to a backend processor and then posts results on a leaderboard. A solutions architect needs to design a solution that can handle large traffic spikes, process the mobile game updates in order of receipt, and store the processed updates in a highly available database. The company also wants to minimize the management overhead required to maintain the solution.
What should the solutions architect do to meet these requirements?
- Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS Lambda. Store the processed updates in Amazon DynamoDB.
- Push score updates to Amazon Kinesis Data Streams. Process the updates with a fleet of Amazon EC2 instances set up for Auto Scaling. Store the processed updates in Amazon Redshift.
- Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to process the updates. Store the processed updates in a SQL database running on Amazon EC2.
- Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of Amazon EC2 instances with Auto Scaling to process the updates in the SQS queue. Store the processed updates in an Amazon RDS Multi-AZ DB instance.
- Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS Lambda. Store the processed updates in Amazon DynamoDB.
minimize the management overhead = serverless = 1 is simplest
gaming leaderboard with spiky traffic = DynamoDB
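A sketch of the consumer half, assuming a hypothetical DynamoDB table named leaderboard: Kinesis preserves order within a shard (per partition key), and the Lambda event source mapping delivers records in that order.

```python
import base64
import json
import boto3

table = boto3.resource("dynamodb").Table("leaderboard")  # hypothetical table

def handler(event, context):
    """Lambda consumer for a Kinesis event source mapping.

    Records within a shard arrive in order of receipt per partition key
    (e.g. per player), satisfying the ordering requirement.
    """
    for record in event["Records"]:
        update = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(Item={"player_id": update["player_id"],
                             "score": int(update["score"])})
```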
A company is building a document storage application on AWS. The application runs on Amazon EC2 instances in multiple Availability Zones. The company requires the document store to be highly available. The documents need to be returned immediately when requested. The lead engineer has configured the application to use Amazon Elastic Block Store (Amazon EBS) to store the documents, but is willing to consider other options to meet the availability requirement.
What should a solutions architect recommend?
- Snapshot the EBS volumes regularly and build new volumes using those snapshots in additional Availability Zones.
- Use Amazon Elastic Block Store (Amazon EBS) for the EC2 instance root volumes. Configure the application to build the document store on Amazon S3.
- Use Amazon Elastic Block Store (Amazon EBS) for the EC2 instance root volumes. Configure the application to build the document store on Amazon S3 Glacier.
- Use at least three Provisioned IOPS EBS volumes for EC2 instances. Mount the volumes to the EC2 instances in a RAID 5 configuration.
- Use Amazon Elastic Block Store (Amazon EBS) for the EC2 instance root volumes. Configure the application to build the document store on Amazon S3.
highly available = S3 meets the availability requirement: objects are automatically stored across multiple devices spanning a minimum of three Availability Zones, each separated by miles, across an AWS Region.
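A sketch of the application-side change, assuming a hypothetical bucket: documents go to S3 object keys instead of an EBS file system, and S3 Standard returns them immediately (unlike Glacier, which has retrieval delays).

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "document-store-bucket"  # hypothetical

def save_document(doc_id: str, body: bytes) -> None:
    s3.put_object(Bucket=BUCKET, Key=f"documents/{doc_id}", Body=body)

def get_document(doc_id: str) -> bytes:
    """S3 Standard has no retrieval delay, so documents return immediately."""
    return s3.get_object(Bucket=BUCKET, Key=f"documents/{doc_id}")["Body"].read()
```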
A group requires permissions to list an Amazon S3 bucket and delete objects from that bucket. An administrator has created the following IAM policy to provide access to the bucket and applied that policy to the group. The group is not able to delete objects in the bucket. The company follows least-privilege access rules.
Which statement should a solutions architect add to the policy to correct bucket access?
A: grants access to all actions on objects in the bucket, more than least privilege allows
B: s3:* grants complete access to the bucket, more than least privilege allows
C: grants access only to delete objects, but not to list the bucket
Look for /* after the S3 resource name (object-level actions require the object ARN) = D is the only correct option
https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-policies-s3.html
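The policy and answer choices appear as an image in the exam, so they are not reproduced here; the sketch below (with hypothetical bucket and group names) illustrates the rule the note relies on: s3:ListBucket targets the bucket ARN, while object-level actions such as s3:DeleteObject need the /* object ARN.

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {  # bucket-level action: resource is the bucket ARN
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example-bucket",
        },
        {  # object-level action: resource needs the /* suffix
            "Effect": "Allow",
            "Action": "s3:DeleteObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
        },
    ],
}

iam.put_group_policy(
    GroupName="audit-group",          # hypothetical group
    PolicyName="list-and-delete",
    PolicyDocument=json.dumps(policy),
)
```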
A solutions architect is designing a security solution for a company that wants to provide developers with individual AWS accounts through AWS Organizations, while also maintaining standard security controls. Because the individual developers will have AWS account root user-level access to their own accounts, the solutions architect wants to ensure that the mandatory AWS CloudTrail configuration that is applied to new developer accounts is not modified.
Which action meets these requirements?
- Create an IAM policy that prohibits changes to CloudTrail, and attach it to the root user.
- Create a new trail in CloudTrail from within the developer accounts with the organization trails option enabled.
- Create a service control policy (SCP) that prohibits changes to CloudTrail, and attach it to the developer accounts.
- Create a service-linked role for CloudTrail with a policy condition that allows changes only from an Amazon Resource Name (ARN) in the master account.
- Create a service control policy (SCP) that prohibits changes to CloudTrail, and attach it to the developer accounts.
individual AWS accounts through AWS Organizations = SCP
organization policies needed = SCP
Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization. SCPs help you to ensure your accounts stay within your organization’s access control guidelines. https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
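A sketch of such an SCP, assuming hypothetical policy names and account ID. Unlike an IAM policy, an SCP constrains even the account root user, which is why option 1 fails.

```python
import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny CloudTrail mutations; applies even to the root user.
            "Effect": "Deny",
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail",
                "cloudtrail:UpdateTrail",
                "cloudtrail:PutEventSelectors",
            ],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="deny-cloudtrail-changes",   # hypothetical
    Description="Prevent developers from modifying the mandatory trail",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                  TargetId="111122223333")  # hypothetical developer account ID
```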
A company wants to share forensic accounting data that is stored in an Amazon RDS DB instance with an external auditor. The auditor has its own AWS account and requires its own copy of the database.
How should the company securely share the database with the auditor?
- Create a read replica of the database and configure IAM standard database authentication to grant the auditor access.
- Copy a snapshot of the database to Amazon S3 and assign an IAM role to the auditor to grant access to the object in that bucket.
- Export the database contents to text files, store the files in Amazon S3, and create a new IAM user for the auditor with access to that bucket.
- Make an encrypted snapshot of the database, share the snapshot, and allow access to the AWS Key Management Service (AWS KMS) encryption key.
- Make an encrypted snapshot of the database, share the snapshot, and allow access to the AWS Key Management Service (AWS KMS) encryption key.
auditor needs its own copy of the database = share an encrypted snapshot + grant access to the KMS key
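A sketch of the sharing steps, assuming hypothetical snapshot identifiers, KMS key, and account IDs. Snapshots encrypted with the default aws/rds key cannot be shared, so the copy must use a customer managed key.

```python
import boto3

rds = boto3.client("rds")

# Copy the snapshot under a customer managed KMS key, then share the copy.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="forensic-db-snapshot",        # hypothetical
    TargetDBSnapshotIdentifier="forensic-db-snapshot-shared",
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="forensic-db-snapshot-shared",
    AttributeName="restore",
    ValuesToAdd=["444455556666"],  # auditor's AWS account ID (hypothetical)
)
# The key policy on the CMK must also grant the auditor's account
# kms:Decrypt, kms:DescribeKey, and kms:CreateGrant to restore the snapshot.
```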
A company has an automobile sales website that stores its listings in a database on Amazon RDS. When an automobile is sold, the listing needs to be removed from the website and the data must be sent to multiple target systems.
Which design should a solutions architect recommend?
- Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) queue for the targets to consume.
- Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) FIFO queue for the targets to consume.
- Subscribe to an RDS event notification and send an Amazon Simple Queue Service (Amazon SQS) queue fanned out to multiple Amazon Simple Notification Service (Amazon SNS) topics. Use AWS Lambda functions to update the targets.
- Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon SNS) topic fanned out to multiple Amazon Simple Queue Service (Amazon SQS) queues. Use AWS Lambda functions to update the targets.
- Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon SNS) topic fanned out to multiple Amazon Simple Queue Service (Amazon SQS) queues. Use AWS Lambda functions to update the targets.
sent to multiple target systems = fanout needed = SNS = 3,4
fanout = multiple SQS queues = 4
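A sketch of the fanout wiring, assuming hypothetical topic and queue names: one SNS topic, one SQS subscription per target system. Each queue's access policy must allow the topic to send messages (sqs:SendMessage).

```python
import boto3

sns = boto3.client("sns")

topic_arn = sns.create_topic(Name="listing-sold")["TopicArn"]  # hypothetical topic

# One queue per target system; each queue policy must permit this topic.
for queue_arn in (
    "arn:aws:sqs:us-east-1:123456789012:billing-queue",    # hypothetical queues
    "arn:aws:sqs:us-east-1:123456789012:inventory-queue",
):
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
```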
A company is building a media sharing application and decides to use Amazon S3 for storage. When a media file is uploaded, the company starts a multi-step process to create thumbnails, identify objects in the images, transcode videos into standard formats and resolutions, and extract and store the metadata to an Amazon DynamoDB table. The metadata is used for searching and navigation.
The amount of traffic is variable. The solution must be able to scale to handle spikes in load without unnecessary expenses.
What should a solutions architect recommend to support this workload?
- Build the processing into the website or mobile app used to upload the content to Amazon S3. Save the required data to the DynamoDB table when the objects are uploaded.
- Trigger AWS Step Functions when an object is stored in the S3 bucket. Have the Step Functions perform the steps needed to process the object and then write the metadata to the DynamoDB table.
- Trigger an AWS Lambda function when an object is stored in the S3 bucket. Have the Lambda function start AWS Batch to perform the steps to process the object. Place the object data in the DynamoDB table when complete.
- Trigger an AWS Lambda function to store an initial entry in the DynamoDB table when an object is uploaded to Amazon S3. Use a program running on an Amazon EC2 instance in an Auto Scaling group to poll the index for unprocessed items, and use the program to perform the processing.
- Trigger AWS Step Functions when an object is stored in the S3 bucket. Have the Step Functions perform the steps needed to process the object and then write the metadata to the DynamoDB table.
multi-step process = AWS Step Functions
The individual steps in the workflow can invoke a Lambda function or a container that has business logic, and can update a database such as DynamoDB.
https://aws.amazon.com/step-functions/use-cases/
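A minimal state machine sketch, assuming hypothetical Lambda function ARNs for two of the processing steps; the S3 object-created event would trigger an execution (e.g. via EventBridge or a small Lambda calling start_execution).

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Each state invokes a (hypothetical) Lambda function for one processing step.
definition = {
    "StartAt": "CreateThumbnails",
    "States": {
        "CreateThumbnails": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:create-thumbnails",
            "Next": "WriteMetadata",
        },
        "WriteMetadata": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:write-metadata",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="media-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-media-role",  # hypothetical
)
```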
A company provides an API to its users that automates inquiries for tax computations based on item prices. The company experiences a larger number of inquiries only during the holiday season, which causes slower response times. A solutions architect needs to design a solution that is scalable and elastic.
What should the solutions architect do to accomplish this?
- Provide an API hosted on an Amazon EC2 instance. The EC2 instance performs the required computations when the API request is made.
- Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax computations.
- Create an Application Load Balancer that has two Amazon EC2 instances behind it. The EC2 instances will compute the tax on the received item names.
- Design a REST API using Amazon API Gateway that connects with an API hosted on an Amazon EC2 instance. API Gateway accepts and passes the item names to the EC2 instance for tax computations.
- Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax computations.
API = API Gateway = 2,4
4 = single EC2 instance behind API Gateway = not scalable
2 = Lambda = scales automatically with demand
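A sketch of the Lambda side behind an API Gateway proxy integration, with a hypothetical price lookup table standing in for the real pricing data.

```python
import json

PRICES = {"laptop": 999.0, "mouse": 25.0}  # hypothetical price lookup
TAX_RATE = 0.08                            # hypothetical flat rate

def handler(event, context):
    """API Gateway proxy integration: POST body like {"item": "laptop"}."""
    body = json.loads(event["body"])
    tax = round(PRICES[body["item"]] * TAX_RATE, 2)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"item": body["item"], "tax": tax}),
    }
```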
An application is running on an Amazon EC2 instance and must have millisecond latency when running the workload. The application makes many small reads and writes to the file system, but the file system itself is small.
Which Amazon Elastic Block Store (Amazon EBS) volume type should a solutions architect attach to their EC2 instance?
- Cold HDD (sc1)
- General Purpose SSD (gp2)
- Provisioned IOPS SSD (io1)
- Throughput Optimized HDD (st1)
- Provisioned IOPS SSD (io1)
millisecond latency + many small reads and writes = consistently high IOPS = io1
The file system is small, and gp2 baseline IOPS scale with volume size (3 IOPS per GiB), so a small gp2 volume cannot sustain many small I/Os; io1 provisions IOPS independently of size.
https://www.amazonaws.cn/ebs/faqs/
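A sketch of provisioning such a volume, assuming a hypothetical AZ and instance ID; note the io1 limit of 50 provisioned IOPS per GiB.

```python
import boto3

ec2 = boto3.client("ec2")

# io1 decouples IOPS from volume size (up to 50 IOPS per GiB), so even a
# small volume gets consistent low-latency I/O.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                # GiB; the file system is small
    VolumeType="io1",
    Iops=5000,               # within the 50:1 IOPS-to-GiB limit
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance
    Device="/dev/sdf",
)
```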
A solutions architect is designing a multi-Region disaster recovery solution for an application that will provide public API access. The application will use Amazon EC2 instances with a userdata script to load application code and an Amazon RDS for MySQL database. The Recovery Time Objective (RTO) is 3 hours and the Recovery Point Objective (RPO) is 24 hours.
Which architecture would meet these requirements at the LOWEST cost?
- Use an Application Load Balancer for Region failover. Deploy new EC2 instances with the userdata script. Deploy separate RDS instances in each Region.
- Use Amazon Route 53 for Region failover. Deploy new EC2 instances with the userdata script. Create a read replica of the RDS instance in a backup Region.
- Use Amazon API Gateway for the public APIs and Region failover. Deploy new EC2 instances with the userdata script. Create a MySQL read replica of the RDS instance in a backup Region.
- Use Amazon Route 53 for Region failover. Deploy new EC2 instances with the userdata script for APIs, and create a snapshot of the RDS instance daily for a backup. Replicate the snapshot to a backup Region.
- Use Amazon Route 53 for Region failover. Deploy new EC2 instances with the userdata script for APIs, and create a snapshot of the RDS instance daily for a backup. Replicate the snapshot to a backup Region.
multi-Region disaster recovery + RPO of 24 hours = daily snapshots replicated to a backup Region are sufficient
LOWEST cost = 4 (a cross-Region read replica runs continuously and costs more than snapshot copies)
https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/
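A sketch of the daily cross-Region snapshot copy, assuming hypothetical snapshot identifiers; the copy is issued from the destination Region, and boto3 builds the required pre-signed URL from SourceRegion automatically.

```python
import boto3

# Run the copy from the destination (backup) Region.
rds_west = boto3.client("rds", region_name="us-west-2")

rds_west.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:snapshot:daily-backup"  # hypothetical
    ),
    TargetDBSnapshotIdentifier="daily-backup-dr",
    SourceRegion="us-east-1",
)
```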