Neal Davis - Practice Test 3 - Incorrect Flashcards
Question 5:
A Solutions Architect created the following policy and attached it to an AWS IAM group containing several administrative users:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "10.1.2.0/24"
                }
            }
        },
        {
            "Effect": "Deny",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "ec2:Region": "us-east-1"
                }
            }
        }
    ]
}
What is the effect of this policy?
A: Administrators can terminate an EC2 instance in any AWS Region except us-east-1.
B: Administrators can terminate an EC2 instance in the us-east-1 Region when the user’s source IP is 10.1.2.28.
C: Administrators cannot terminate an EC2 instance in the us-east-1 Region when the user’s source IP is 10.1.2.28.
D: Administrators can terminate an EC2 instance with the IP address 10.1.2.5 in the us-east-1 Region.
Explanation
The Condition element (or Condition block) lets you specify conditions for when a policy is in effect. The Condition element is optional. In the Condition element, you build expressions in which you use condition operators (equal, less than, etc.) to match the condition keys and values in the policy against keys and values in the request context.
In this policy the first statement allows the “ec2:TerminateInstances” API action only if the IP address of the requester is within the “10.1.2.0/24” range. This is specified using the “aws:SourceIp” condition key.
The second statement denies all EC2 API actions using a condition operator (StringNotEquals) that checks the Region the request is being made in (“ec2:Region”). If the Region is any value other than us-east-1 the request will be denied. If the request is made in us-east-1 it will not be denied.
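The combined effect of the two statements can be verified with the IAM policy simulator. Below is a minimal boto3 sketch, assuming the policy JSON from the question; the request context values (source IP 10.1.2.28 and Region us-east-1) come from the answer options:

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "ec2:TerminateInstances", "Resource": "*",
         "Condition": {"IpAddress": {"aws:SourceIp": "10.1.2.0/24"}}},
        {"Effect": "Deny", "Action": "ec2:*", "Resource": "*",
         "Condition": {"StringNotEquals": {"ec2:Region": "us-east-1"}}},
    ],
}

# Simulate a TerminateInstances call made from 10.1.2.28 against us-east-1.
response = iam.simulate_custom_policy(
    PolicyInputList=[json.dumps(policy)],
    ActionNames=["ec2:TerminateInstances"],
    ContextEntries=[
        {"ContextKeyName": "aws:SourceIp", "ContextKeyValues": ["10.1.2.28"],
         "ContextKeyType": "ip"},
        {"ContextKeyName": "ec2:Region", "ContextKeyValues": ["us-east-1"],
         "ContextKeyType": "string"},
    ],
)
# Expect "allowed" here; changing the Region to anything other than us-east-1,
# or the source IP to one outside 10.1.2.0/24, flips the decision to a deny.
print(response["EvaluationResults"][0]["EvalDecision"])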
CORRECT B: “Administrators can terminate an EC2 instance in the us-east-1 Region when the user’s source IP is 10.1.2.28” is the correct answer.
INCORRECT: “Administrators cannot terminate an EC2 instance in the us-east-1 Region when the user’s source IP is 10.1.2.28” is incorrect. This is not true; the conditions allow this action.
INCORRECT: “Administrators can terminate an EC2 instance in any AWS Region except us-east-1” is incorrect. The statement allowing the terminate action only has a condition on the source IP; if the source IP is in the range, the action is allowed. The second statement only denies API actions if the Region is NOT us-east-1. Therefore, the user can terminate instances in us-east-1.
INCORRECT: “Administrators can terminate an EC2 instance with the IP address 10.1.2.5 in the us-east-1 Region” is incorrect. The aws:SourceIp condition is checking the IP address of the requester (where you’re making the call from), not the resource you want to terminate.
Question 12:
An online store uses an Amazon Aurora database. The database is deployed as a Multi-AZ deployment. Recently, metrics have shown that database read requests are high and causing performance issues which result in latency for write requests.
What should the solutions architect do to separate the read requests from the write requests?
A: Update the application to read from the Aurora Replica
B: Enable read through caching on the Amazon Aurora database
C: Create a second Amazon Aurora database and link it to the primary database as a read replica
D: Create a read replica and modify the application to use the appropriate endpoint
Explanation
Aurora Replicas are independent endpoints in an Aurora DB cluster, best used for scaling read operations and increasing availability. Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region.
The DB cluster volume is made up of multiple copies of the data for the DB cluster. However, the data in the cluster volume is represented as a single, logical volume to the primary instance and to Aurora Replicas in the DB cluster.
As well as providing scaling for reads, Aurora Replicas are also failover targets in a Multi-AZ configuration. In this case the solutions architect can update the application to read from the Aurora Replica.
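As an illustration, a minimal boto3 sketch of retrieving the cluster’s endpoints so the application can direct reads to the replica; the cluster identifier is a placeholder:

import boto3

rds = boto3.client("rds")

# Look up the Aurora cluster's endpoints (cluster identifier is hypothetical).
cluster = rds.describe_db_clusters(
    DBClusterIdentifier="my-aurora-cluster"
)["DBClusters"][0]

writer_endpoint = cluster["Endpoint"]        # send INSERT/UPDATE traffic here
reader_endpoint = cluster["ReaderEndpoint"]  # send SELECT traffic here;
                                             # load balances across replicas

print(f"Writes -> {writer_endpoint}")
print(f"Reads  -> {reader_endpoint}")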
CORRECT: “Update the application to read from the Aurora Replica” is the correct answer.
INCORRECT: “Create a read replica and modify the application to use the appropriate endpoint” is incorrect. An Aurora Replica is both a standby in a Multi-AZ configuration and a target for read traffic. The architect simply needs to direct traffic to the Aurora Replica.
INCORRECT: “Enable read through caching on the Amazon Aurora database.” is incorrect as this is not a feature of Amazon Aurora.
INCORRECT: “Create a second Amazon Aurora database and link it to the primary database as a read replica” is incorrect as an Aurora Replica already exists as this is a Multi-AZ configuration and the standby is an Aurora Replica that can be used for read traffic.
Question 18:
A company requires a high-performance file system that can be mounted on Amazon EC2 Windows instances and Amazon EC2 Linux instances. Applications running on the EC2 instances perform separate processing of the same files and the solution must provide a file system that can be mounted by all instances simultaneously.
Which solution meets these requirements?
A: Use Amazon Elastic File System (Amazon EFS) with General Purpose performance mode for the Windows instances and the Linux instances.
B: Use Amazon FSx for Windows File Server for the Windows instances and the Linux instances.
C: Use Amazon FSx for Windows File Server for the Windows instances. Use Amazon Elastic File System (Amazon EFS) with Max I/O performance mode for the Linux instances.
D: Use Amazon FSx for Windows File Server for the Windows instances. Use Amazon FSx for Lustre for the Linux instances. Link both Amazon FSx file systems to the same Amazon S3 bucket.
Explanation
Amazon FSx for Windows File Server provides a fully managed native Microsoft Windows file system so you can easily move your Windows-based applications that require shared file storage to AWS. You can easily connect Linux instances to the file system by installing the cifs-utils package. The Linux instances can then mount an SMB/CIFS file system.
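As a rough sketch of the Linux side, the mount might look like the following (wrapped in Python for illustration; the file system DNS name, share name, mount point, and credentials file are all placeholders, and the exact options depend on your Active Directory configuration):

import subprocess

# Mount an FSx for Windows File Server SMB share on Linux (requires cifs-utils).
# The DNS name, share name, mount point, and credentials file are placeholders.
subprocess.run(
    [
        "mount", "-t", "cifs",
        "//amznfsxexample.corp.example.com/share",  # FSx DNS name + file share
        "/mnt/fsx",                                 # local mount point (must exist)
        "-o", "vers=3.0,sec=ntlmsspi,credentials=/etc/fsx-credentials",
    ],
    check=True,
)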
CORRECT: “Use Amazon FSx for Windows File Server for the Windows instances and the Linux instances” is the correct answer.
INCORRECT: “Use Amazon FSx for Windows File Server for the Windows instances. Use Amazon Elastic File System (Amazon EFS) with Max I/O performance mode for the Linux instances” is incorrect. This solution results in two separate file systems and a shared file system is required.
INCORRECT: “Use Amazon Elastic File System (Amazon EFS) with General Purpose performance mode for the Windows instances and the Linux instances” is incorrect. You cannot use Amazon EFS for Windows instances as this is not supported.
INCORRECT: “Use Amazon FSx for Windows File Server for the Windows instances. Use Amazon FSx for Lustre for the Linux instances. Link both Amazon FSx file systems to the same Amazon S3 bucket” is incorrect. Amazon FSx for Windows File Server does not use Amazon S3 buckets, so this is another solution that results in separate file systems.
Question 19:
An Architect needs to find a way to automatically and repeatably create many member accounts within an AWS Organization. The accounts also need to be moved into an OU and have VPCs and subnets created.
What is the best way to achieve this?
A: Use the AWS Organizations API
B: Use CloudFormation with scripts
C: Use the AWS Management Console
D: Use the AWS CLI
Explanation
The best solution is to use a combination of scripts and AWS CloudFormation. You will also leverage the AWS Organizations API. This solution can provide all of the requirements.
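A minimal sketch of how such a script might combine the Organizations API with CloudFormation, using boto3; the account details, OU IDs, role name, and template file are all placeholders, and error handling is omitted:

import time
import boto3

org = boto3.client("organizations")

# 1. Create the member account (email and name are placeholders).
status_id = org.create_account(
    Email="new-account@example.com", AccountName="workload-account-1"
)["CreateAccountStatus"]["Id"]

# 2. Poll until account creation completes.
while True:
    status = org.describe_create_account_status(
        CreateAccountRequestId=status_id
    )["CreateAccountStatus"]
    if status["State"] != "IN_PROGRESS":
        break
    time.sleep(10)
account_id = status["AccountId"]

# 3. Move the new account into the target OU (parent/OU IDs are placeholders).
org.move_account(
    AccountId=account_id,
    SourceParentId="r-exampleroot",
    DestinationParentId="ou-example-ou",
)

# 4. Assume the default cross-account role in the new account and deploy
#    a CloudFormation stack that creates the VPC and subnets.
creds = boto3.client("sts").assume_role(
    RoleArn=f"arn:aws:iam::{account_id}:role/OrganizationAccountAccessRole",
    RoleSessionName="bootstrap",
)["Credentials"]
cfn = boto3.client(
    "cloudformation",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
cfn.create_stack(StackName="baseline-network",
                 TemplateBody=open("vpc.yaml").read())  # placeholder template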
CORRECT: “Use CloudFormation with scripts” is the correct answer.
INCORRECT: “Use the AWS Organizations API” is incorrect. You can create member accounts with the AWS Organizations API. However, you cannot use that API to configure the account and create VPCs and subnets.
INCORRECT: “Use the AWS Management Console” is incorrect. Using the AWS Management Console is not a method of automatically creating the resources.
INCORRECT: “Use the AWS CLI” is incorrect. You can do all tasks using the AWS CLI but it is better to automate the process using AWS CloudFormation.
Question 24:
A company runs a containerized application on an Amazon Elastic Kubernetes Service (EKS) cluster using a microservices architecture. The company requires a solution to collect, aggregate, and summarize metrics and logs. The solution should provide a centralized dashboard for viewing information including CPU and memory utilization for EKS namespaces, services, and pods.
Which solution meets these requirements?
A: Configure AWS X-Ray to enable tracing for the EKS microservices. Query the trace data using Amazon Elasticsearch.
B: Migrate the containers to Amazon ECS and enable Amazon CloudWatch Container Insights. View the metrics and logs in the CloudWatch console.
C: Configure Amazon CloudWatch Container Insights in the existing EKS cluster. View the metrics and logs in the CloudWatch console.
D: Run the Amazon CloudWatch agent in the existing EKS cluster. View the metrics and logs in the CloudWatch console.
Explanation
Use CloudWatch Container Insights to collect, aggregate, and summarize metrics and logs from your containerized applications and microservices. Container Insights is available for Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), and Kubernetes platforms on Amazon EC2.
With Container Insights for EKS you can see the top contributors by memory or CPU, or the most recently active resources. This is available when you select any of the following dashboards in the drop-down box near the top of the page (an example of querying these metrics programmatically follows the list):
• ECS Services
• ECS Tasks
• EKS Namespaces
• EKS Services
• EKS Pods
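Once Container Insights is enabled, the metrics are published to the ContainerInsights namespace in CloudWatch and can also be queried programmatically. A minimal boto3 sketch, assuming placeholder cluster and Kubernetes namespace names:

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

# Average pod CPU utilization for one EKS namespace over the last hour.
# Cluster name and Kubernetes namespace are placeholders.
stats = cloudwatch.get_metric_statistics(
    Namespace="ContainerInsights",
    MetricName="pod_cpu_utilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "my-eks-cluster"},
        {"Name": "Namespace", "Value": "default"},
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])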
CORRECT: “Configure Amazon CloudWatch Container Insights in the existing EKS cluster. View the metrics and logs in the CloudWatch console” is the correct answer.
INCORRECT: “Run the Amazon CloudWatch agent in the existing EKS cluster. View the metrics and logs in the CloudWatch console” is incorrect. Container Insights is the best way to view the required data.
INCORRECT: “Migrate the containers to Amazon ECS and enable Amazon CloudWatch Container Insights. View the metrics and logs in the CloudWatch console” is incorrect. There is no need to migrate containers to ECS as EKS is supported for Container Insights.
INCORRECT: “Configure AWS X-Ray to enable tracing for the EKS microservices. Query the trace data using Amazon Elasticsearch” is incorrect. X-Ray will not deliver the required statistics to a centralized dashboard.
Question 28:
A web application is running on a fleet of Amazon EC2 instances using an Auto Scaling Group. It is desired that the CPU usage in the fleet is kept at 40%.
How should scaling be configured?
A: Use a target tracking policy that keeps the average aggregate CPU utilization at 40%
B: Use a custom CloudWatch alarm to monitor CPU usage and notify the ASG using Amazon SNS
C: Use a simple scaling policy that launches instances when the average CPU hits 40%
D: Use a step scaling policy that uses the PercentChangeInCapacity value to adjust the group size as required
Explanation
This is a perfect use case for a target tracking scaling policy. With target tracking scaling policies, you select a scaling metric and set a target value. In this case you can just set the target value to 40% average aggregate CPU utilization.
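A minimal boto3 sketch of creating such a policy; the Auto Scaling group name is a placeholder:

import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking policy: keep average CPU across the group at 40%.
# The ASG automatically adds/removes instances to hold this target.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # placeholder group name
    PolicyName="keep-cpu-at-40",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 40.0,
    },
)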
CORRECT: “Use a target tracking policy that keeps the average aggregate CPU utilization at 40%” is the correct answer.
INCORRECT: “Use a simple scaling policy that launches instances when the average CPU hits 40%” is incorrect. A simple scaling policy will add instances when 40% CPU utilization is reached, but it is not designed to maintain 40% CPU utilization across the group.
INCORRECT: “Use a step scaling policy that uses the PercentChangeInCapacity value to adjust the group size as required” is incorrect. The step scaling policy makes scaling adjustments based on a number of factors. The PercentChangeInCapacity value increments or decrements the group size by a specified percentage. This does not relate to CPU utilization.
INCORRECT: “Use a custom CloudWatch alarm to monitor CPU usage and notify the ASG using Amazon SNS” is incorrect. You do not need to create a custom Amazon CloudWatch alarm as the ASG can scale using a policy based on CPU utilization using standard configuration.
Question 38:
A company is deploying an analytics application on AWS Fargate. The application requires connected storage that offers concurrent access to files and high performance.
Which storage option should the solutions architect recommend?
A: Create an Amazon S3 bucket for the application and establish an IAM role for Fargate to communicate with Amazon S3.
B: Create an Amazon FSx for Lustre file share and establish an IAM role that allows Fargate to communicate with FSx for Lustre.
C: Create an Amazon EBS volume for the application and establish an IAM role that allows Fargate to communicate with Amazon EBS.
D: Create an Amazon EFS file share and establish an IAM role that allows Fargate to communicate with Amazon EFS.
Explanation
The Amazon Elastic File System offers concurrent access to a shared file system and provides high performance. You can create file system policies for controlling access and then use an IAM role that is specified in the policy for access.
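In practice the EFS file system is attached through the ECS task definition. A trimmed boto3 sketch, assuming placeholder names, container image, and file system ID:

import boto3

ecs = boto3.client("ecs")

# Register a Fargate task definition with a shared EFS volume
# (family, image, and file system ID are placeholders).
ecs.register_task_definition(
    family="analytics-task",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    containerDefinitions=[
        {
            "name": "analytics",
            "image": "example/analytics:latest",
            "mountPoints": [
                {"sourceVolume": "shared-data", "containerPath": "/data"}
            ],
        }
    ],
    volumes=[
        {
            "name": "shared-data",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-12345678",   # placeholder EFS file system
                "transitEncryption": "ENABLED",
            },
        }
    ],
)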
CORRECT: “Create an Amazon EFS file share and establish an IAM role that allows Fargate to communicate with Amazon EFS” is the correct answer.
INCORRECT: “Create an Amazon S3 bucket for the application and establish an IAM role for Fargate to communicate with Amazon S3” is incorrect. S3 uses a REST API not a file system API so access can be shared but is not concurrent.
INCORRECT: “Create an Amazon EBS volume for the application and establish an IAM role that allows Fargate to communicate with Amazon EBS” is incorrect. EBS volumes cannot be shared amongst Fargate tasks, they are used with EC2 instances.
INCORRECT: “Create an Amazon FSx for Lustre file share and establish an IAM role that allows Fargate to communicate with FSx for Lustre” is incorrect. Connecting Fargate to FSx for Lustre is not supported.
Question 39:
A company runs a business-critical application in the us-east-1 Region. The application uses an Amazon Aurora MySQL database cluster which is 2 TB in size. A Solutions Architect needs to determine a disaster recovery strategy for failover to the us-west-2 Region. The strategy must provide a recovery time objective (RTO) of 10 minutes and a recovery point objective (RPO) of 5 minutes.
Which strategy will meet these requirements?
A: Create a cross-Region Aurora MySQL read replica in us-west-2 Region. Configure an Amazon EventBridge rule that invokes an AWS Lambda function that promotes the read replica in us-west-2 when failure is detected.
B: Recreate the database as an Aurora global database with the primary DB cluster in us-east-1 and a secondary DB cluster in us-west-2. Use an Amazon EventBridge rule that invokes an AWS Lambda function to promote the DB cluster in us-west-2 when failure is detected.
C: Recreate the database as an Aurora multi-master cluster across the us-east-1 and us-west-2 Regions with multiple writers to allow read/write capabilities from all database instances.
D: Create a multi-Region Aurora MySQL DB cluster in us-east-1 and us-west-2. Use an Amazon Route 53 health check to monitor us-east-1 and fail over to us-west-2 upon failure.
Explanation
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS Regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each Region, and provides disaster recovery from Region-wide outages.
If your primary Region suffers a performance degradation or outage, you can promote one of the secondary Regions to take read/write responsibilities. An Aurora cluster can recover in less than 1 minute even in the event of a complete Regional outage.
This provides your application with an effective Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of less than 1 minute, providing a strong foundation for a global business continuity plan.
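In an unplanned Regional outage the secondary cluster is detached from the global database and promoted to a standalone read/write cluster. A minimal boto3 sketch, assuming placeholder identifiers:

import boto3

# Run against the secondary Region (us-west-2 in this scenario).
rds = boto3.client("rds", region_name="us-west-2")

# Detaching the secondary cluster from the global database promotes it
# to a standalone cluster with full read/write capability.
rds.remove_from_global_cluster(
    GlobalClusterIdentifier="my-global-db",  # placeholder
    DbClusterIdentifier=(
        "arn:aws:rds:us-west-2:123456789012:cluster:my-secondary-cluster"
    ),  # placeholder ARN of the secondary cluster
)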
CORRECT: “Recreate the database as an Aurora global database with the primary DB cluster in us-east-1 and a secondary DB cluster in us-west-2. Use an Amazon EventBridge rule that invokes an AWS Lambda function to promote the DB cluster in us-west-2 when failure is detected” is the correct answer.
INCORRECT: “Create a multi-Region Aurora MySQL DB cluster in us-east-1 and us-west-2. Use an Amazon Route 53 health check to monitor us-east-1 and fail over to us-west-2 upon failure” is incorrect. You cannot create a multi-Region Aurora MySQL DB cluster. The options are to create cross-Region read replicas (which may not meet the RTO objective) or to use an Aurora global database.
INCORRECT: “Create a cross-Region Aurora MySQL read replica in us-west-2 Region. Configure an Amazon EventBridge rule that invokes an AWS Lambda function that promotes the read replica in us-west-2 when failure is detected” is incorrect. This may not meet the RTO objectives as large databases may well take more than 10 minutes to promote.
INCORRECT: “Recreate the database as an Aurora multi-master cluster across the us-east-1 and us-west-2 Regions with multiple writers to allow read/write capabilities from all database instances” is incorrect. Multi-master clusters only work within a single Region; they do not work across Regions.
Question 51:
An application allows users to upload and download files. Files older than 2 years will be accessed less frequently. A solutions architect needs to ensure that the application can scale to any number of files while maintaining high availability and durability.
Which scalable solutions should the solutions architect recommend?
A: Store the files in Amazon Elastic Block Store (EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data older than 2 years
B: Store the files in Amazon Elastic Block Store (EBS) volumes. Create a lifecycle policy to move files older than 2 years to Amazon S3 Glacier
C: Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard Infrequent Access (S3 Standard-IA)
D: Store the files on Amazon Elastic File System (EFS) with a lifecycle policy that moves objects older than 2 years to EFS Infrequent Access (EFS IA)
Explanation
S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance make S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files.
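A minimal boto3 sketch of the lifecycle rule, assuming a placeholder bucket name and using 730 days to approximate 2 years:

import boto3

s3 = boto3.client("s3")

# Transition objects to S3 Standard-IA once they are older than ~2 years.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-file-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-old-files-to-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [
                    {"Days": 730, "StorageClass": "STANDARD_IA"}
                ],
            }
        ]
    },
)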
CORRECT: “Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard Infrequent Access (S3 Standard-IA)” is the correct answer.
INCORRECT: “Store the files on Amazon Elastic File System (EFS) with a lifecycle policy that moves objects older than 2 years to EFS Infrequent Access (EFS IA)” is incorrect. With EFS you can transition files to EFS IA after a file has not been accessed for a specified period of time with options up to 90 days. You cannot transition based on an age of 2 years.
INCORRECT: “Store the files in Amazon Elastic Block Store (EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data older than 2 years” is incorrect. You cannot identify the age of data and archive snapshots in this way with EBS.
INCORRECT: “Store the files in Amazon Elastic Block Store (EBS) volumes. Create a lifecycle policy to move files older than 2 years to Amazon S3 Glacier” is incorrect. You cannot archive files from an EBS volume to Glacier using lifecycle policies.
Question 52:
A company is planning to migrate a large quantity of important data to Amazon S3. The data will be uploaded to a versioning enabled bucket in the us-west-1 Region. The solution needs to include replication of the data to another Region for disaster recovery purposes.
How should a solutions architect configure the replication?
A: Create an additional S3 bucket with versioning in another Region and configure cross-Region replication
B: Create an additional S3 bucket in another Region and configure cross-origin resource sharing (CORS)
C: Create an additional S3 bucket in another Region and configure cross-Region replication
D: Create an additional S3 bucket with versioning in another Region and configure cross-origin resource sharing (CORS)
Explanation
Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. You can copy objects between different AWS Regions or within the same Region. Both source and destination buckets must have versioning enabled.
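A trimmed boto3 sketch of the setup, assuming placeholder bucket names and replication role (the role must grant Amazon S3 permission to replicate on your behalf):

import boto3

s3 = boto3.client("s3")

# Versioning must be enabled on BOTH source and destination buckets.
for bucket in ("source-bucket-us-west-1", "dr-bucket-us-east-2"):  # placeholders
    s3.put_bucket_versioning(
        Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
    )

# Configure cross-Region replication on the source bucket.
s3.put_bucket_replication(
    Bucket="source-bucket-us-west-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # placeholder
        "Rules": [
            {
                "ID": "dr-replication",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # replicate all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::dr-bucket-us-east-2"},
            }
        ],
    },
)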
CORRECT: “Create an additional S3 bucket with versioning in another Region and configure cross-Region replication” is the correct answer.
INCORRECT: “Create an additional S3 bucket in another Region and configure cross-Region replication” is incorrect as the destination bucket must also have versioning enabled.
INCORRECT: “Create an additional S3 bucket in another Region and configure cross-origin resource sharing (CORS)” is incorrect as CORS is not related to replication.
INCORRECT: “Create an additional S3 bucket with versioning in another Region and configure cross-origin resource sharing (CORS)” is incorrect as CORS is not related to replication.
Question 58:
An application runs on Amazon EC2 Linux instances. The application generates log files which are written using standard API calls. A storage solution is required that can be used to store the files indefinitely and must allow concurrent access to all files.
Which storage service meets these requirements and is the MOST cost-effective?
A: Amazon S3
B: Amazon EBS
C: Amazon EFS
D: Amazon EC2 instance store
Explanation
The application is writing the files using API calls which means it will be compatible with Amazon S3 which uses a REST API. S3 is a massively scalable key-based object store that is well-suited to allowing concurrent access to the files from many instances.
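For illustration, writing and reading a log file through the S3 API with boto3; the bucket name and object key are placeholders:

import boto3

s3 = boto3.client("s3")

# Write a log file using a standard API call (bucket/key are placeholders).
s3.put_object(
    Bucket="app-log-bucket",
    Key="logs/2023/app-01.log",
    Body=b"2023-01-01T00:00:00Z INFO application started\n",
)

# Any number of clients can concurrently read the same object.
body = s3.get_object(
    Bucket="app-log-bucket", Key="logs/2023/app-01.log"
)["Body"].read()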
Amazon S3 will also be the most cost-effective choice. A rough calculation using the AWS pricing calculator shows the cost differences between 1TB of storage on EBS, EFS, and S3 Standard.
CORRECT: “Amazon S3” is the correct answer.
INCORRECT: “Amazon EFS” is incorrect as though this does offer concurrent access from many EC2 Linux instances, it is not the most cost-effective solution.
INCORRECT: “Amazon EBS” is incorrect. Amazon Elastic Block Store (EBS) is not a good solution for concurrent access from many EC2 instances and is not the most cost-effective option either. EBS volumes are mounted to a single instance, except when using Multi-Attach, which is a newer feature with several constraints.
INCORRECT: “Amazon EC2 instance store” is incorrect as this is an ephemeral storage solution which means the data is lost when powered down. Therefore, this is not an option for long-term data storage.
Question 59:
A company has created an application that stores sales performance data in an Amazon DynamoDB table. A web application is being created to display the data. A Solutions Architect must design the web application using managed services that require minimal operational maintenance.
Which architectures meet these requirements? (Select TWO.)
A: An Amazon API Gateway REST API invokes an AWS Lambda function. The Lambda function reads data from the DynamoDB table.
B: An Elastic Load Balancer forwards requests to a target group of Amazon EC2 instances. The EC2 instances run an application that reads data from the DynamoDB table.
C: An Elastic Load Balancer forwards requests to a target group with the DynamoDB table configured as the target.
D: An Amazon API Gateway REST API directly accesses the sales performance data in the DynamoDB table.
E: An Amazon Route 53 hosted zone routes requests to an AWS Lambda endpoint to invoke a Lambda function that reads data from the DynamoDB table.
Explanation
There are two architectures here that fulfill the requirement to create a web application that displays the data from the DynamoDB table.
The first one is to use an API Gateway REST API that invokes an AWS Lambda function. A Lambda proxy integration can be used, and this will proxy the API requests to the Lambda function which processes the request and accesses the DynamoDB table.
The second option is to use an API Gateway REST API to directly access the sales performance data. In this case a proxy for the DynamoDB query API can be created using a method in the REST API.
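A minimal sketch of the Lambda function for the first architecture, assuming a placeholder table name; with Lambda proxy integration, API Gateway passes the request through as the event and expects a response in this shape:

import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SalesPerformance")  # placeholder table name

def handler(event, context):
    # Scan is used here for brevity; a real application would Query on a key.
    items = table.scan(Limit=50)["Items"]
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(items, default=str),  # default=str handles Decimals
    }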
CORRECT: “An Amazon API Gateway REST API invokes an AWS Lambda function. The Lambda function reads data from the DynamoDB table” is a correct answer.
CORRECT: “An Amazon API Gateway REST API directly accesses the sales performance data in the DynamoDB table” is also a correct answer.
INCORRECT: “An Amazon Route 53 hosted zone routes requests to an AWS Lambda endpoint to invoke a Lambda function that reads data from the DynamoDB table” is incorrect. An Alias record could be created in a hosted zone, but a hosted zone itself does not route to a Lambda endpoint. Using an Alias record it is possible to route to a VPC endpoint that fronts a Lambda function; however, there would be no web front end, so a REST API is preferable.
INCORRECT: “An Elastic Load Balancer forwards requests to a target group with the DynamoDB table configured as the target” is incorrect. You cannot configure DynamoDB as a target in a target group.
INCORRECT: “An Elastic Load Balancer forwards requests to a target group of Amazon EC2 instances. The EC2 instances run an application that reads data from the DynamoDB table” is incorrect. This would not offer low operational maintenance as you must manage the EC2 instances.
Question 64:
A company hosts statistical data in an Amazon S3 bucket that users around the world download from their website using a URL that resolves to a domain name. The company needs to provide low latency access to users and plans to use Amazon Route 53 for hosting DNS records.
Which solution meets these requirements?
A: Create an A record in Route 53, use a Route 53 traffic policy for the web application, and configure a geolocation rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy.
B: Create a web distribution on Amazon CloudFront pointing to an Amazon S3 origin. Create an ALIAS record in the Amazon Route 53 hosted zone that points to the CloudFront distribution, resolving to the application’s URL domain name.
C: Create a web distribution on Amazon CloudFront pointing to an Amazon S3 origin. Create a CNAME record in a Route 53 hosted zone that points to the CloudFront distribution, resolving to the application’s URL domain name.
D: Create an A record in Route 53, use a Route 53 traffic policy for the web application, and configure a geoproximity rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy.
Explanation
This is a simple requirement for low latency access to the contents of an Amazon S3 bucket for global users. The best solution here is to use Amazon CloudFront to cache the content in Edge Locations around the world. This involves creating a web distribution that points to an S3 origin (the bucket) and then creating an Alias record in Route 53 that resolves the application’s URL to the CloudFront distribution endpoint.
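A boto3 sketch of the Alias record, assuming placeholder hosted zone, domain, and distribution values (Z2FDTNDATAQYW2 is the fixed hosted zone ID used for all CloudFront distributions):

import boto3

route53 = boto3.client("route53")

# Alias A record pointing the application's domain at the CloudFront distribution.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # placeholder: the zone hosting the domain
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "stats.example.com",  # placeholder domain name
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z2FDTNDATAQYW2",  # fixed ID for CloudFront
                        "DNSName": "d111111abcdef8.cloudfront.net",  # placeholder
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)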
CORRECT: “Create a web distribution on Amazon CloudFront pointing to an Amazon S3 origin. Create an ALIAS record in the Amazon Route 53 hosted zone that points to the CloudFront distribution, resolving to the application’s URL domain name” is the correct answer.
INCORRECT: “Create a web distribution on Amazon CloudFront pointing to an Amazon S3 origin. Create a CNAME record in a Route 53 hosted zone that points to the CloudFront distribution, resolving to the application’s URL domain name” is incorrect. An Alias record should be used to point to an Amazon CloudFront distribution.
INCORRECT: “Create an A record in Route 53, use a Route 53 traffic policy for the web application, and configure a geolocation rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy” is incorrect. There is only a single endpoint (the Amazon S3 bucket) so this strategy would not work. Much better to use CloudFront to cache in multiple locations.
INCORRECT: “Create an A record in Route 53, use a Route 53 traffic policy for the web application, and configure a geoproximity rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy” is incorrect. Again, there is only one endpoint so this strategy will simply not work.
Question 65:
A gaming company collects real-time data and stores it in an on-premises database system. The company is migrating to AWS and needs better performance for the database. A solutions architect has been asked to recommend an in-memory database that supports data replication.
Which database should a solutions architect recommend?
A: Amazon ElastiCache for Redis
B: Amazon RDS for MySQL
C: Amazon RDS for PostgreSQL
D: Amazon ElastiCache for Memcached
Explanation
Amazon ElastiCache is an in-memory database service. With ElastiCache for Memcached there is no data replication or high availability; each node holds a separate partition of the data.
Therefore, the Redis engine must be used, as it supports both data replication and clustering.
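A minimal boto3 sketch of creating a Redis replication group with one primary and two read replicas; the identifiers and node type are placeholders:

import boto3

elasticache = boto3.client("elasticache")

# Redis replication group: 1 primary + 2 read replicas with automatic failover.
elasticache.create_replication_group(
    ReplicationGroupId="game-data",                       # placeholder
    ReplicationGroupDescription="Real-time game data with replication",
    Engine="redis",
    CacheNodeType="cache.r6g.large",                      # placeholder node type
    NumCacheClusters=3,                                   # 1 primary + 2 replicas
    AutomaticFailoverEnabled=True,
)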
CORRECT: “Amazon ElastiCache for Redis” is the correct answer.
INCORRECT: “Amazon ElastiCache for Memcached” is incorrect as Memcached does not support data replication or high availability.
INCORRECT: “Amazon RDS for MySQL” is incorrect as this is not an in-memory database.
INCORRECT: “Amazon RDS for PostgreSQL” is incorrect as this is not an in-memory database.