Question3 Flashcards

1
Q

A company is deploying a web service that will provide read and write access to structured data. The company expects there to be variable usage patterns with some short but significant spikes. The service must dynamically scale and must be fault tolerant across multiple AWS Regions.
Which actions should a Solutions Architect take to meet these requirements?
• ​
Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. Run the web service in both Regions as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record and a latency-based routing policy with health checks to distribute traffic between the two ALBs.
• ​
Store the data in Amazon S3 buckets in two Regions and configure cross-Region replication. Create an Amazon CloudFront distribution that points to multiple origins. Use Amazon API Gateway and AWS Lambda for the web frontend and configure Amazon Route 53 with an alias record pointing to the REST API.
• ​
Store the data in Amazon Aurora global databases. Add Auto Scaling replicas to both Regions. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. In Amazon Route 53, configure an alias record and a multi-value routing policy.
• ​
Store the data in Amazon DocumentDB in two Regions. Use AWS DMS to synchronize data between databases. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. In Amazon Route 53, configure an alias record and a failover routing policy.

A

• ​
Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. Run the web service in both Regions as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record and a latency-based routing policy with health checks to distribute traffic between the two ALBs.
(Correct)

Explanation
The DynamoDB global tables solution is the only database solution that will allow writes in both AWS Regions. Using Amazon ECS Fargate tasks with Auto Scaling will ensure the compute layer scales appropriately and an ALB will distribute connections across multiple tasks.
This solution is active-active and so a latency-based routing policy will direct users to the closest Region to improve performance. Health checks are enabled which means that if a Region outage occurs, traffic will be directed to the second Region.
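As a minimal illustration (not part of the original explanation), the sketch below uses boto3 to create a table in on-demand capacity mode and then add a second-Region replica to form a global table; the table name "WebServiceData" and the Region pair are assumptions.

```python
# Minimal boto3 sketch: create an on-demand table and add a replica Region
# to form a global table. Table name and Regions are illustrative assumptions.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="WebServiceData",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity mode
    # Global tables require streams with new and old images enabled
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
dynamodb.get_waiter("table_exists").wait(TableName="WebServiceData")

# Add a replica in a second Region (global tables version 2019.11.21)
dynamodb.update_table(
    TableName="WebServiceData",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```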
CORRECT: "Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. Run the web service in both Regions as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record and a latency-based routing policy with health checks to distribute traffic between the two ALBs" is correct.
INCORRECT: "Store the data in Amazon Aurora global databases. Add Auto Scaling replicas to both Regions. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. In Amazon Route 53, configure an alias record and a multi-value routing policy" is incorrect. Aurora global databases provide read access in multiple Regions, but writes can only be made in one Region. In this solution the multi-value routing policy would direct connections to healthy ALBs in both Regions, so any attempt to write in the read-only Region would fail.
INCORRECT: "Store the data in Amazon DocumentDB in two Regions. Use AWS DMS to synchronize data between databases. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. In Amazon Route 53, configure an alias record and a failover routing policy" is incorrect. DocumentDB is used for storing JSON documents rather than structured data, so it is not suitable.
INCORRECT: "Store the data in Amazon S3 buckets in two Regions and configure cross-Region replication. Create an Amazon CloudFront distribution that points to multiple origins. Use Amazon API Gateway and AWS Lambda for the web frontend and configure Amazon Route 53 with an alias record pointing to the REST API" is incorrect. When using CloudFront with multiple origins, behaviors must be configured to direct traffic to a specific origin. This would not work when the content is the same, as CloudFront does not function as a load balancer or provide automatic failover in this configuration.
References:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-security-identity-compliance/
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-database/

2
Q

A tech startup is planning to launch a new global mobile marketplace using AWS Amplify and AWS Mobile Hub. To lower latency, the backend APIs will be deployed to multiple AWS Regions so that sales and financial transactions are processed in the Region closest to the users. You are instructed to design the system architecture to ensure that transactions made in one Region are automatically replicated to the other Regions. In the coming months, the marketplace is expected to have millions of users across North America, South America, Europe, and Asia.
Which of the following is the most scalable, cost-effective and highly available architecture that you should implement?
• ​
In each local region, store the individual transactions to a DynamoDB table. Set up an AWS Lambda function to read recent writes from the table, and replay the data to DynamoDB tables in all other regions.
• ​
Use a combination of AWS Control Tower and Amazon Connect to launch and centrally manage multiple DynamoDB tables in various AWS Regions. In each local region, store the individual transactions to a DynamoDB replica table in the same region.
• ​
Create an Amazon Aurora Multi-Master database on all required regions. Store the individual transactions to the Amazon Aurora instance in the local region. Replicate the transactions table between regions using Aurora replication. In this set up, any changes made in one of the tables will be automatically replicated across all other tables.
• ​
Create a Global DynamoDB table with replica tables across several AWS regions that you prefer. In each local region, store the individual transactions to a DynamoDB replica table in the same region. Any changes made in one of the replica tables will automatically be replicated across all other tables.

A

• ​
Create a Global DynamoDB table with replica tables across several AWS regions that you prefer. In each local region, store the individual transactions to a DynamoDB replica table in the same region. Any changes made in one of the replica tables will automatically be replicated across all other tables.
(Correct)

Explanation
Global Tables builds upon DynamoDB’s global footprint to provide you with a fully managed, multi-region, and multi-master database that provides fast, local, read and write performance for massively scaled, global applications. Global Tables replicates your Amazon DynamoDB tables automatically across your choice of AWS regions.
Global Tables eliminates the difficult work of replicating data between regions and resolving update conflicts, enabling you to focus on your application’s business logic. In addition, Global Tables enables your applications to stay highly available even in the unlikely event of isolation or degradation of an entire region.
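For example, a minimal sketch of how a regional backend might write to its local replica (the table name "Transactions" and Region are assumptions) and rely on Global Tables for replication:

```python
# Minimal sketch: each regional backend writes to the replica table in its
# own Region; Global Tables replicates the item to the other Regions.
from decimal import Decimal
import boto3

def record_sale(region, transaction_id, amount):
    table = boto3.resource("dynamodb", region_name=region).Table("Transactions")
    table.put_item(
        Item={
            "transaction_id": transaction_id,
            "amount": Decimal(str(amount)),  # DynamoDB expects Decimal, not float
            "source_region": region,
        }
    )

# e.g. an API running in eu-west-1 writes locally; the change is replicated
record_sale("eu-west-1", "txn-0001", 19.99)
```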
Hence, the option that says: "Create a Global DynamoDB table with replica tables across several AWS regions that you prefer. In each local region, store the individual transactions to a DynamoDB replica table in the same region. Any changes made in one of the replica tables will automatically be replicated across all other tables" is the correct answer.

The option that says: "In each local region, store the individual transactions to a DynamoDB table. Set up an AWS Lambda function to read recent writes from the table, and replay the data to DynamoDB tables in all other regions" is incorrect because using an AWS Lambda function to replicate all data across Regions is not a scalable solution. Remember that there will be millions of users using the mobile app around the world, which entails a lot of replication and compute capacity for a single Lambda function. In this scenario, the best solution is to use Global DynamoDB tables with DynamoDB Streams enabled to automatically handle the replication process.
The option that says: "Use a combination of AWS Control Tower and Amazon Connect to launch and centrally manage multiple DynamoDB tables in various AWS Regions. In each local region, store the individual transactions to a DynamoDB replica table in the same region" is incorrect because Amazon Connect is an omnichannel cloud contact center service that helps companies provide customer service at a lower cost, while AWS Control Tower offers an easy way to set up and govern a new, secure, multi-account AWS environment. You can't use these two services to set up a Global DynamoDB table.
The option that says: "Create an Amazon Aurora Multi-Master database on all required regions. Store the individual transactions to the Amazon Aurora instance in the local region. Replicate the transactions table between regions using Aurora replication. In this setup, any changes made in one of the tables will be automatically replicated across all other tables" is incorrect because, by default, all DB instances in a multi-master cluster must be in the same AWS Region and you can't enable cross-Region replicas from multi-master clusters. In addition, DynamoDB provides better global scalability for mobile applications compared to Amazon Aurora.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/globaltables_HowItWorks.html
https://aws.amazon.com/dynamodb/global-tables/

Check out this Amazon DynamoDB Cheat Sheet:
https://tutorialsdojo.com/amazon-dynamodb/

3
Q

A new application will ingest millions of records per minute from user devices all over the world. Each record is less than 4 KB in size and must be stored durably and accessed with low latency. The data must be stored for 90 days after which it can be deleted. It has been estimated that storage requirements for a year will be 15-20TB.
Which storage strategy is the MOST cost-effective and meets the design requirements?
• ​
Store each incoming record as a single .csv file in an Amazon S3 bucket. Configure a lifecycle policy to delete data older than 90 days.
• ​
Store each incoming record in a single table in an Amazon RDS MySQL database. Run a nightly cron job that executes a query to delete any records older than 90 days.
• ​
Store each incoming record in an Amazon DynamoDB table. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 90 days.
• ​
Store the records in an Amazon Kinesis Data Stream. Configure the Time to Live (TTL) feature to delete records older than 90 days.

A

• ​
Store each incoming record in an Amazon DynamoDB table. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 90 days.
(Correct)

Explanation
Amazon DynamoDB is a suitable data store as it can scale to the throughput required and offers low latency. The TTL feature can be used to expire data.
Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput. TTL is provided at no extra cost as a means to reduce stored data volumes by retaining only the items that remain current for your workload’s needs.
The image below depicts a table with an “expiry” column that specifies the expiry date in Epoch format:
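As a minimal sketch (the table name "DeviceRecords" and attribute name "expiry" are assumptions), TTL can be enabled on the table and items written with an epoch expiry 90 days in the future:

```python
# Minimal boto3 sketch: enable TTL on the table and write an item whose
# "expiry" attribute is an epoch timestamp 90 days ahead.
import time
import boto3

client = boto3.client("dynamodb")

client.update_time_to_live(
    TableName="DeviceRecords",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expiry"},
)

ninety_days = 90 * 24 * 60 * 60
client.put_item(
    TableName="DeviceRecords",
    Item={
        "record_id": {"S": "device-123#2024-01-01T00:00:00Z"},
        "payload": {"S": "<up to 4 KB of record data>"},
        "expiry": {"N": str(int(time.time()) + ninety_days)},
    },
)
```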

CORRECT: "Store each incoming record in an Amazon DynamoDB table. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 90 days" is the correct answer.
INCORRECT: "Store each incoming record as a single .csv file in an Amazon S3 bucket. Configure a lifecycle policy to delete data older than 90 days" is incorrect. Writing millions of small objects per minute to S3 incurs significant request charges, and S3 does not provide the low-latency item access that DynamoDB offers, so this is neither the most cost-effective nor the best-performing option.
INCORRECT: "Store each incoming record in a single table in an Amazon RDS MySQL database. Run a nightly cron job that executes a query to delete any records older than 90 days" is incorrect. DynamoDB is more suitable as it has a native feature for expiring data and is better suited to this kind of high-throughput key-value workload than a SQL database.
INCORRECT: "Store the records in an Amazon Kinesis Data Stream. Configure the Time to Live (TTL) feature to delete records older than 90 days" is incorrect. Kinesis Data Streams is a streaming ingestion service, not a durable low-latency record store, and it has no TTL feature; data is only retained for the stream's configured retention period.
References:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-database/

4
Q

A financial company processes transactions using on-premises application servers which save output to an Amazon DynamoDB table. The company's data center is connected to AWS using an AWS Direct Connect (DX) connection. Company management has mandated that the solution must be available across multiple Regions. Consistent network performance must be maintained at all times.
What changes should the company make to meet these requirements?
• ​
Use an AWS managed VPN to connect to a second AWS Region. Create a copy of the DynamoDB table in the second Region. Enable DynamoDB streams in the primary Region and use AWS Lambda to synchronize data to the copied table.
• ​
Create a DX connection to a second AWS Region. Create an identical DynamoDB table in the second Region. Enable DynamoDB auto scaling to manage throughput capacity. Modify the application to write to the second Region.
• ​
Use an AWS managed VPN to connect to a second AWS Region. Create a copy of the DynamoDB table in the second Region. Enable DynamoDB streams in the primary Region and use AWS DMS to synchronize data to the copied table.
• ​
Create a DX connection to a second AWS Region. Use DynamoDB global tables to replicate data to the second Region. Modify the application to fail over to the second Region.

A

• ​
Create a DX connection to a second AWS Region. Use DynamoDB global tables to replicate data to the second Region. Modify the application to fail over to the second Region.
(Correct)

Explanation
To ensure consistent network performance, an AWS Direct Connect connection must be used, as AWS Managed VPN relies on the public internet, which cannot offer consistent performance.
Amazon DynamoDB Global Tables is a fully managed, multi-Region, multi-active database. This means you can read and write to multiple Regions. In the event of a Region failure, the application logic must fail over to an endpoint in another Region where a replica table is running.
The diagram below depicts how DynamoDB replicates data between Regions and accepts reads and writes:
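A minimal sketch of the application-side failover logic (the Region names and table name are assumptions) might look like this:

```python
# Minimal sketch: write to the primary Region's replica table and fall back
# to the secondary Region's replica if the call fails.
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

PRIMARY, SECONDARY = "us-east-1", "us-west-2"
TABLE = "Transactions"

def write_transaction(item):
    for region in (PRIMARY, SECONDARY):
        try:
            boto3.resource("dynamodb", region_name=region).Table(TABLE).put_item(Item=item)
            return region  # Region that accepted the write
        except (ClientError, EndpointConnectionError):
            continue       # try the next Region's replica
    raise RuntimeError("Both Regions unavailable")
```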

CORRECT: "Create a DX connection to a second AWS Region. Use DynamoDB global tables to replicate data to the second Region. Modify the application to fail over to the second Region" is the correct answer.
INCORRECT: "Use an AWS managed VPN to connect to a second AWS Region. Create a copy of the DynamoDB table in the second Region. Enable DynamoDB streams in the primary Region and use AWS Lambda to synchronize data to the copied table" is incorrect. A VPN does not offer consistent performance and DynamoDB global tables should be used for a multi-active database.
INCORRECT: "Create a DX connection to a second AWS Region. Create an identical DynamoDB table in the second Region. Enable DynamoDB auto scaling to manage throughput capacity. Modify the application to write to the second Region" is incorrect. This does not offer any solution for creating a synchronized copy of the database in a second Region.
INCORRECT: "Use an AWS managed VPN to connect to a second AWS Region. Create a copy of the DynamoDB table in the second Region. Enable DynamoDB streams in the primary Region and use AWS DMS to synchronize data to the copied table" is incorrect. A VPN does not offer consistent performance and AWS DMS should be replaced with DynamoDB global tables.
References:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/V2globaltables_HowItWorks.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-networking-content-delivery/

5
Q

A company captures financial transactions in Amazon DynamoDB tables. The security team is concerned about identifying fraudulent behavior and has requested that all changes to items stored in DynamoDB tables must be logged within 30 minutes.
How can a Solutions Architect meet this requirement?
• ​
Copy the DynamoDB tables into Apache Hive tables on Amazon EMR every hour and analyze them for anomalous behaviors. Send Amazon SNS notifications when anomalous behaviors are detected.
• ​
Use Amazon DynamoDB Streams to capture and send updates to AWS Lambda. Create a Lambda function to output records to Amazon Kinesis Data Streams. Analyze any anomalies with Amazon Kinesis Data Analytics. Send SNS notifications when anomalous behaviors are detected.

• ​
Use event patterns in Amazon CloudWatch Events to capture DynamoDB API call events with an AWS Lambda function as a target to analyze behavior. Send SNS notifications when anomalous behaviors are detected.
• ​
Use AWS CloudTrail to capture all the APIs that change the DynamoDB tables. Send SNS notifications when anomalous behaviors are detected using CloudTrail event filtering.

A

• ​
Use Amazon DynamoDB Streams to capture and send updates to AWS Lambda. Create a Lambda function to output records to Amazon Kinesis Data Streams. Analyze any anomalies with Amazon Kinesis Data Analytics. Send SNS notifications when anomalous behaviors are detected.
(Correct)

Explanation
DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table. DynamoDB is integrated with AWS Lambda so that you can create triggers: pieces of code that automatically respond to events in DynamoDB Streams.
With triggers, you can build applications that react to data modifications in DynamoDB tables. In this case the Lambda function can process the data and place it in a Kinesis Data Stream where Data Analytics can analyze the data and send an SNS notification if any fraudulent behavior is detected.
The diagram below depicts how data is updated in the table (1), the modification is added to the stream (2), and then AWS Lambda processes the record (3). The record of the update can then be analyzed using Kinesis (not shown).
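A minimal Lambda handler sketch (the stream name "fraud-analysis-stream" is an assumption) that forwards each item-level change to a Kinesis data stream could look like this:

```python
# Minimal Lambda handler sketch: triggered by DynamoDB Streams, it forwards
# each item-level change to a Kinesis data stream for downstream analysis.
import json
import boto3

kinesis = boto3.client("kinesis")

def handler(event, context):
    for record in event["Records"]:
        change = {
            "event_name": record["eventName"],           # INSERT / MODIFY / REMOVE
            "keys": record["dynamodb"].get("Keys", {}),
            "new_image": record["dynamodb"].get("NewImage", {}),
        }
        kinesis.put_record(
            StreamName="fraud-analysis-stream",
            Data=json.dumps(change),
            PartitionKey=json.dumps(record["dynamodb"].get("Keys", {})),
        )
    return {"forwarded": len(event["Records"])}
```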

CORRECT: "Use Amazon DynamoDB Streams to capture and send updates to AWS Lambda. Create a Lambda function to output records to Amazon Kinesis Data Streams. Analyze any anomalies with Amazon Kinesis Data Analytics. Send SNS notifications when anomalous behaviors are detected" is the correct answer.
INCORRECT: "Use event patterns in Amazon CloudWatch Events to capture DynamoDB API call events with an AWS Lambda function as a target to analyze behavior. Send SNS notifications when anomalous behaviors are detected" is incorrect. To capture item-level modifications, DynamoDB Streams should be used; capturing API calls will not help.
INCORRECT: "Copy the DynamoDB tables into Apache Hive tables on Amazon EMR every hour and analyze them for anomalous behaviors. Send Amazon SNS notifications when anomalous behaviors are detected" is incorrect. This solution uses the wrong features and tools for the job, an hourly copy cannot meet the 30-minute requirement, and it is not as automated as the correct answer.
INCORRECT: "Use AWS CloudTrail to capture all the APIs that change the DynamoDB tables. Send SNS notifications when anomalous behaviors are detected using CloudTrail event filtering" is incorrect. Capturing API calls does not give you the information you need at an item level. To capture the changes to the items you must use DynamoDB Streams.
References:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-database/

6
Q

A company has deployed an application that uses an Amazon DynamoDB table and the user base has increased significantly. Users have reported poor response times during busy periods but no error pages have been generated. The application uses Amazon DynamoDB in read-only mode. The operations team has determined that the issue relates to ProvisionedThroughputExceeded exceptions in the application logs when doing Scan and read operations.
A Solutions Architect has been tasked with improving application performance. Which solutions will meet these requirements whilst MINIMIZING changes to the application? (Select TWO.)
• ​
Include error retries and exponential backoffs in the application code to handle throttling errors and reduce load during periods of high requests.
• ​
Enable adaptive capacity for the DynamoDB table to minimize throttling due to throughput exceptions.
• ​
Enable DynamoDB Auto Scaling to manage the throughput capacity as table traffic increases. Set the upper and lower limits to control costs and set a target utilization based on the peak usage.

• ​
Provision an Amazon ElastiCache for Redis cluster. The cluster should be provisioned with enough shards to handle the peak application load.
• ​
Provision a DynamoDB Accelerator (DAX) cluster with the correct number and type of nodes. Tune the item and query cache configuration for an optimal user experience.

A

• ​
Enable DynamoDB Auto Scaling to manage the throughput capacity as table traffic increases. Set the upper and lower limits to control costs and set a target utilization based on the peak usage.
(Correct)

• ​
Provision a DynamoDB Accelerator (DAX) cluster with the correct number and type of nodes. Tune the item and query cache configuration for an optimal user experience.
(Correct)

Explanation
When a ProvisionedThroughputExceeded error is generated it means that insufficient throughput has been enabled on the table. In this case that would be insufficient read capacity units (RCUs). Enabling DynamoDB Auto Scaling will ensure that the RCUs are adjusted based on load.
You configure Auto Scaling by specifying the minimum and maximum capacity units and the target utilization as you can see in the image below:

Another great addition to the solution is to create an Amazon DynamoDB Accelerator (DAX) cluster. DAX is a caching solution for DynamoDB that can be placed in front of the database. This will provide much improved read performance without any application changes.
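As a minimal sketch (the table name, capacity limits, and target utilization are assumptions), read capacity Auto Scaling can be configured with Application Auto Scaling:

```python
# Minimal sketch: register the table's read capacity with Application Auto
# Scaling and attach a target-tracking policy.
import boto3

autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/ProductCatalog",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=100,
    MaxCapacity=3000,
)

autoscaling.put_scaling_policy(
    PolicyName="ProductCatalogReadScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/ProductCatalog",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,  # keep consumed RCUs around 70% of provisioned
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```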
CORRECT: "Enable DynamoDB Auto Scaling to manage the throughput capacity as table traffic increases. Set the upper and lower limits to control costs and set a target utilization based on the peak usage" is a correct answer.
CORRECT: "Provision a DynamoDB Accelerator (DAX) cluster with the correct number and type of nodes. Tune the item and query cache configuration for an optimal user experience" is also a correct answer.
INCORRECT: "Enable adaptive capacity for the DynamoDB table to minimize throttling due to throughput exceptions" is incorrect. Adaptive capacity is enabled automatically for every DynamoDB table, at no additional cost. You don't need to explicitly enable or disable it.
INCORRECT: "Provision an Amazon ElastiCache for Redis cluster. The cluster should be provisioned with enough shards to handle the peak application load" is incorrect. DAX is a better solution for a DynamoDB table as it works without any code changes, which is preferred in this scenario.
INCORRECT: "Include error retries and exponential backoffs in the application code to handle throttling errors and reduce load during periods of high requests" is incorrect. This option also requires application code changes, so it should be avoided in this scenario.
References:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.concepts.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-database/

7
Q

A company has a mobile application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The application is write intensive and costs have recently increased significantly. The biggest increase in cost has been for the AWS Lambda functions. Application utilization is unpredictable but has been increasing steadily each month.
A Solutions Architect has noticed that the Lambda function execution time averages over 4 minutes. This is due to wait time for a high-latency network call to an on-premises MySQL database. A VPN is used to connect to the VPC.
How can the Solutions Architect reduce the cost of the current architecture?
• ​
- Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database.
- Enable local caching in the mobile application to reduce the Lambda function invocation calls.
- Offload the frequently accessed records from DynamoDB to Amazon ElastiCache.
• ​
- Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL.
- Enable API caching on API Gateway to reduce the number of Lambda function invocations.
- Enable Auto Scaling in DynamoDB.
• ​
- Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database.
- Cache the API Gateway results to Amazon CloudFront.
- Use Amazon EC2 Reserved Instances instead of Lambda.
- Enable Auto Scaling on EC2 and use Spot Instances during peak times.
- Enable DynamoDB Auto Scaling to manage target utilization.
• ​
- Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL.
- Enable caching of the Amazon API Gateway results in Amazon CloudFront to reduce the number of Lambda function invocations.
- Enable DynamoDB Accelerator for frequently accessed records and enable the DynamoDB Auto Scaling feature.

A

• ​
- Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL.
- Enable API caching on API Gateway to reduce the number of Lambda function invocations.
- Enable Auto Scaling in DynamoDB.
(Correct)

Explanation
The best way to reduce the latency of the network call to the on-premises database is to move the database to AWS using Amazon RDS. Additionally, API caching will cache the API responses for an API Gateway stage which further improves performance. Finally, enabling Auto Scaling in DynamoDB ensures that the read and write capacity of the table will adjust according to load which further increases cost efficiency.
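A minimal sketch of enabling stage-level API caching with boto3 (the REST API ID, stage name, cache size, and TTL are placeholders) might look like this:

```python
# Minimal sketch: enable the API Gateway stage cache and set a TTL so
# repeated reads are served from the cache instead of invoking Lambda.
import boto3

apigw = boto3.client("apigateway")

apigw.update_stage(
    restApiId="a1b2c3d4e5",   # placeholder REST API ID
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},        # GB
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "300"},
    ],
)
```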
CORRECT:
· Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL.
· Enable API caching on API Gateway to reduce the number of Lambda function invocations.
· Enable Auto Scaling in DynamoDB.
INCORRECT:
· Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database.
· Enable local caching in the mobile application to reduce the Lambda function invocation calls.
· Offload the frequently accessed records from DynamoDB to Amazon ElastiCache.
AWS Direct Connect will reduce latency; however, it comes at a significant cost. Using local caching in the mobile application may provide some performance benefit but will not prevent the high-latency network calls from happening. ElastiCache can be used to cache DynamoDB table contents; however, DynamoDB Accelerator (DAX) is easier to implement in front of DynamoDB.
INCORRECT:
· Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database.
· Cache the API Gateway results to Amazon CloudFront.
· Use Amazon EC2 Reserved Instances instead of Lambda.
· Enable Auto Scaling on EC2 and use Spot Instances during peak times.
· Enable DynamoDB Auto Scaling to manage target utilization.
API Gateway results cannot be cached in CloudFront. EC2 Reserved Instances are unlikely to be more cost-efficient than Lambda functions. The key is to prevent the high-latency network calls from occurring, which is the best resolution to the problem. Auto Scaling with Spot Instances and DynamoDB Auto Scaling are both valid options for cost and performance optimization.
INCORRECT:
· Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL.
· Enable caching of the Amazon API Gateway results in Amazon CloudFront to reduce the number of Lambda function invocations.
· Enable DynamoDB Accelerator for frequently accessed records and enable the DynamoDB Auto Scaling feature.
You cannot cache API Gateway results in CloudFront. Otherwise this is a good solution.
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
https://aws.amazon.com/getting-started/hands-on/create-mysql-db/
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-application-integration/

8
Q

An eCommerce company runs an application that records product registration information. The application uses an Amazon S3 bucket for storing files and an Amazon DynamoDB table to store customer record data. The application software runs in us-west-1 and eu-central-1. The S3 bucket and DynamoDB table are in us-west-1. A Solutions Architect has been asked to implement protection from data corruption and the loss of connectivity to either Region.
Which solution meets these requirements?
• ​
Create a DynamoDB global table to replicate data between us-west-1 and eu-central-1. Enable versioning on the S3 bucket. Implement strict ACLs on the S3 bucket.
• ​
Create a DynamoDB global table to replicate data between us-west-1 and eu-central-1. Enable continuous backup on the DynamoDB table in us-west-1. Enable versioning on the S3 bucket.
• ​
Create an AWS Lambda function triggered by Amazon CloudWatch Events to make regular backups of the DynamoDB table. Set up S3 cross-region replication from us-west-1 to eu-central-1. Set up MFA delete on the S3 bucket in us-west-1.
• ​
Create a DynamoDB global table to replicate data between us-west-1 and eu-central-1. Enable continuous backup on the DynamoDB table in us-west-1. Set up S3 cross-region replication from us-west-1 to eu-central-1.

A

• ​
Create a DynamoDB global table to replicate data between us-west-1 and eu-central-1. Enable continuous backup on the DynamoDB table in us-west-1. Set up S3 cross-region replication from us-west-1 to eu-central-1.
(Correct)

Explanation
A DynamoDB global table is a multi-Region, multi-active database. This means you can create a table and write to that table in multiple Regions and AWS synchronizes the items. This provides the required redundancy for the database table. DynamoDB continuous backups can also be enabled. This provides point-in-time recovery (PITR) with per-second granularity, allowing a restore to any single second from the time PITR was enabled up to the prior 35 days. This protects against data corruption.
For Amazon S3 you can enable cross-Region replication, which requires that versioning is enabled. This provides synchronization of changes and also a version history in case of data corruption.
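As a minimal sketch (the table and bucket names are assumptions), point-in-time recovery and bucket versioning, a prerequisite for cross-Region replication, can be enabled as follows:

```python
# Minimal sketch: enable point-in-time recovery on the table and versioning
# on the source bucket (required before configuring cross-Region replication).
import boto3

boto3.client("dynamodb", region_name="us-west-1").update_continuous_backups(
    TableName="CustomerRecords",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

boto3.client("s3").put_bucket_versioning(
    Bucket="product-registration-files-us-west-1",
    VersioningConfiguration={"Status": "Enabled"},
)
```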
CORRECT: "Create a DynamoDB global table to replicate data between us-west-1 and eu-central-1. Enable continuous backup on the DynamoDB table in us-west-1. Set up S3 cross-region replication from us-west-1 to eu-central-1" is the correct answer.
INCORRECT: "Create an AWS Lambda function triggered by Amazon CloudWatch Events to make regular backups of the DynamoDB table. Set up S3 cross-region replication from us-west-1 to eu-central-1. Set up MFA delete on the S3 bucket in us-west-1" is incorrect. There's no need to use Lambda when you can just enable continuous backups. MFA delete will not protect against data corruption.
INCORRECT: "Create a DynamoDB global table to replicate data between us-west-1 and eu-central-1. Enable continuous backup on the DynamoDB table in us-west-1. Enable versioning on the S3 bucket" is incorrect. Versioning will help with data corruption but there's no solution here to replicate the S3 data to another Region.
INCORRECT: "Create a DynamoDB global table to replicate data between us-west-1 and eu-central-1. Enable versioning on the S3 bucket. Implement strict ACLs on the S3 bucket" is incorrect. ACLs cannot protect against data corruption and there's no solution here for protecting the data in the DynamoDB table from corruption.
References:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html
https://aws.amazon.com/blogs/aws/new-amazon-dynamodb-continuous-backups-and-point-in-time-recovery-pitr/
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-database/

9
Q

A company wants to host a web application on AWS. The application will be used by users around the world. A Solutions Architect has been given the following design requirements:
· Allow the retrieval of data from multiple data sources.
· Minimize the cost of API calls.
· Reduce latency for user access.
· Provide user authentication and authorization and implement role-based access control.
· Implement a fully serverless solution.
How can the Solutions Architect meet these requirements?
• ​
Use Amazon CloudFront with Amazon EC2 to host the web application. Use Amazon API Gateway to build the application APIs. Use AWS Lambda for custom authentication and authorization. Authorize data access by leveraging IAM roles.
• ​
Use Amazon CloudFront with Amazon S3 to host the web application. Use Amazon API Gateway to build the application APIs with AWS Lambda for the custom authorizer. Authorize data access by performing user lookup in AWS Managed Microsoft AD.
• ​
Use Amazon CloudFront with Amazon FSx to host the web application. Use AWS AppSync to build the application APIs. Use IAM groups for RBAC. Authorize data access by leveraging IAM groups in AWS AppSync resolvers.
• ​
Use Amazon CloudFront with Amazon S3 to host the web application. Use AWS AppSync to build the application APIs. Use Amazon Cognito groups for RBAC. Authorize data access by leveraging Cognito groups in AWS AppSync resolvers.

A

• ​
Use Amazon CloudFront with Amazon S3 to host the web application. Use AWS AppSync to build the application APIs. Use Amazon Cognito groups for RBAC. Authorize data access by leveraging Cognito groups in AWS AppSync resolvers.
(Correct)

Explanation
CloudFront with S3 provides a low-latency solution for global users to access the web application. AWS AppSync can be used to provide a GraphQL API that can be used to query multiple databases, microservices, and APIs (allow the retrieval of data from multiple data sources).

Amazon Cognito Groups can be used to create collections of users to manage their permissions or to represent different types of users. You can assign an AWS Identity and Access Management (IAM) role to a group to define the permissions for members of a group.

AWS AppSync GraphQL resolvers connect the fields in a type’s schema to a data source. Resolvers are the mechanism by which requests are fulfilled. Cognito groups can be used with resolvers to provide authorization based on identity.
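As a minimal sketch (the user pool ID, group name, and username are placeholders), Cognito groups for RBAC can be created and populated like this; an AppSync resolver can then authorize requests based on the caller's "cognito:groups" claim:

```python
# Minimal sketch: create a Cognito group and add a user to it. An AppSync
# resolver can check the caller's group membership to authorize data access.
import boto3

cognito = boto3.client("cognito-idp")

cognito.create_group(
    UserPoolId="us-east-1_EXAMPLE",   # placeholder user pool ID
    GroupName="Admins",
    Description="Users allowed to modify data via the AppSync API",
)

cognito.admin_add_user_to_group(
    UserPoolId="us-east-1_EXAMPLE",
    Username="jane.doe",
    GroupName="Admins",
)
```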

CORRECT: “Use Amazon CloudFront with Amazon S3 to host the web application. Use AWS AppSync to build the application APIs. Use Amazon Cognito groups for RBAC. Authorize data access by leveraging Cognito groups in AWS AppSync resolvers” is the correct answer.

INCORRECT: “Use Amazon CloudFront with Amazon S3 to host the web application. Use Amazon API Gateway to build the application APIs with AWS Lambda for the custom authorizer. Authorize data access by performing user lookup in AWS Managed Microsoft AD” is incorrect. AppSync is a better fit than using API Gateway due to the requirement to retrieve data from multiple data sources.

INCORRECT: “Use Amazon CloudFront with Amazon FSx to host the web application. Use AWS AppSync to build the application APIs. Use IAM groups for RBAC. Authorize data access by leveraging IAM groups in AWS AppSync resolvers” is incorrect. You cannot point CloudFront at an Amazon FSx file system.

INCORRECT: “Use Amazon CloudFront with Amazon EC2 to host the web application. Use Amazon API Gateway to build the application APIs. Use AWS Lambda for custom authentication and authorization. Authorize data access by leveraging IAM roles” is incorrect. EC2 is not serverless so should not be used in this solution.

References:

https://docs.aws.amazon.com/appsync/latest/devguide/security-authorization-use-cases.html

Save time with our exam-specific cheat sheets:

https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-front-end-web-mobile/

10
Q

A Solutions Architect has deployed a REST API using an Amazon API Gateway Regional endpoint. The API will be consumed by a growing number of US-based companies. Each company will use the API twice each day to get the latest data.
Following the deployment of the API the operations team noticed thousands of requests coming from hundreds of IP addresses around the world. The traffic is believed to be originating from a botnet. The Solutions Architect must secure the API while minimizing cost.
Which approach should the company take to secure its API?
• ​
Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than ten requests per day. Associate the web ACL with the CloudFront distribution. Configure CloudFront with an origin access identity (OAI) and associate it with the distribution. Configure API Gateway to ensure only the OAI can execute the GET method.
• ​
Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than ten requests per day. Associate the web ACL with the CloudFront distribution. Add a custom header to the CloudFront distribution populated with an API key. Configure the API to require an API key on the GET method.
• ​
Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the companies. Associate the web ACL with the API. Create a resource policy with a request limit and associate it with the API. Configure the API to require an API key on the POST method.
• ​
Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the companies. Associate the web ACL with the API. Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.

A

• ​
Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the companies. Associate the web ACL with the API. Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.
(Correct)

Explanation
API Gateway usage plans specify who can access deployed API stages and define throttling and quota limits, such as the number of requests a client can make per day. The API key associated with the usage plan ensures that only clients presenting a valid key in their requests are granted access, and the AWS WAF web ACL restricts access to the IP addresses used by the companies. This solution requires that the IP addresses of the clients are whitelisted and that the API key is distributed to clients to use in their requests to the API.
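As a minimal sketch (the API ID, stage name, and limits are placeholders), the usage plan, API key, and association can be created with boto3:

```python
# Minimal sketch: create a usage plan with a daily request quota, create an
# API key, and attach the key to the plan so only key-holding clients within
# the quota can call the API.
import boto3

apigw = boto3.client("apigateway")

plan = apigw.create_usage_plan(
    name="partner-companies",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],  # placeholder API/stage
    throttle={"rateLimit": 10.0, "burstLimit": 20},
    quota={"limit": 1000, "period": "DAY"},
)

key = apigw.create_api_key(name="partner-key", enabled=True)

apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)
```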
CORRECT: "Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the companies. Associate the web ACL with the API. Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan" is correct.
INCORRECT: "Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than ten requests per day. Associate the web ACL with the CloudFront distribution. Add a custom header to the CloudFront distribution populated with an API key. Configure the API to require an API key on the GET method" is incorrect.
A WAF rate-based rule tracks the rate of requests for each originating IP address and triggers the rule action on IPs with rates that go over a limit. The limit is set as a number of requests per 5-minute time span, so you cannot configure an AWS WAF rate-based rule to limit requests to 10 per day. Also, to minimize cost, CloudFront is not required in this solution.
INCORRECT: "Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than ten requests per day. Associate the web ACL with the CloudFront distribution. Configure CloudFront with an origin access identity (OAI) and associate it with the distribution. Configure API Gateway to ensure only the OAI can execute the GET method" is incorrect.
As above for rate-based rules. An OAI is a special CloudFront user that is used with Amazon S3 buckets to prevent direct access using S3 URLs. It is usually used along with other protections such as signed URLs and signed cookies. It is not possible to use an OAI with API Gateway APIs.
INCORRECT: "Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the companies. Associate the web ACL with the API. Create a resource policy with a request limit and associate it with the API. Configure the API to require an API key on the POST method" is incorrect.
API Gateway resource policies control whether or not a principal or source IP address/CIDR block is allowed to invoke the API. A resource policy does not have a request limit associated with it; request limits are configured in a usage plan. Also, the API key should be configured on the GET method, as this API is being used to get data, not post it.
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies.html
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-aws-waf.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-networking-content-delivery/
