Neal Davis - Practice Test 4 - Correct Flashcards
Question 1:
A company is migrating an application that comprises a web tier and a MySQL database into the AWS Cloud. The web tier will run on EC2 instances, and the database tier will run on an Amazon RDS for MySQL DB instance. Customers access the application via the Internet using dynamic IP addresses.
How should the Solutions Architect configure the security groups to enable connectivity to the application?
A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0. Configure the security group for the DB tier to allow inbound traffic on port 3306 from 0.0.0.0/0.
B. Configure the security group for the web tier to allow inbound traffic on port 443 from the IP addresses of the customers. Configure the security group for the DB tier to allow inbound traffic on port 3306 from the web tier security group.
C. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0. Configure the security group for the DB tier to allow inbound traffic on port 3306 from the web tier security group.
D. Configure the security group for the web tier to allow inbound traffic on port 443 from the IP addresses of the customers. Configure the security group for the DB tier to allow inbound traffic on port 3306 from the IP addresses of the customers.
Explanation
The customers connect from dynamic IP addresses, so we must assume these will change regularly. Therefore, it is not possible to restrict access to the IP addresses of the customers. The security group for the web tier must allow 443 (HTTPS) from 0.0.0.0/0, which means any source IP address.
For the database tier, this can best be secured by restricting access to the web tier security group. The port required to be opened is 3306 for MySQL.
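As a rough sketch (the group reference "sg-web" is a hypothetical placeholder), the two security groups could be modeled as:

```python
# Web tier: HTTPS open to the world, since customer IPs are dynamic.
web_tier_sg = {
    "ingress": [
        {"protocol": "tcp", "port": 443, "source": "0.0.0.0/0"},  # HTTPS from anywhere
    ]
}

# DB tier: reference the web tier security group rather than an IP range,
# so only the web instances can reach MySQL on 3306.
db_tier_sg = {
    "ingress": [
        {"protocol": "tcp", "port": 3306, "source": "sg-web"},
    ]
}
```

The key design point is the source of the DB rule: a security group ID, not a CIDR block, so membership of the web tier Auto Scaling group can change without updating the DB rules.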
CORRECT: “Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0. Configure the security group for the DB tier to allow inbound traffic on port 3306 from the web tier security group” is the correct answer.
INCORRECT: “Configure the security group for the web tier to allow inbound traffic on port 443 from the IP addresses of the customers. Configure the security group for the DB tier to allow inbound traffic on port 3306 from the IP addresses of the customers” is incorrect.
The customer IP addresses are dynamic, so it is not possible to restrict access using IP addresses. Access to the DB tier should be restricted to the web tier; there is no need to enable end-user access.
INCORRECT: “Configure the security group for the web tier to allow inbound traffic on port 443 from the IP addresses of the customers. Configure the security group for the DB tier to allow inbound traffic on port 3306 from the web tier security group” is incorrect.
The customer IP addresses are dynamic, so it is not possible to restrict access using IP addresses.
INCORRECT: “Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0. Configure the security group for the DB tier to allow inbound traffic on port 3306 from 0.0.0.0/0” is incorrect.
Access to the DB tier should be restricted to the web tier; there is no need to enable access from the internet.
Question 6:
A telecommunications company is looking to expand its 5G coverage nationwide and, as a result, needs to provision and build its own private cellular network with the help of AWS.
Which solution does AWS provide to help with this?
A. AWS CloudHSM
B. AWS Outposts
C. AWS Wavelength
D. AWS Private 5G
Explanation
AWS Private 5G is a managed service that makes it easy to deploy, operate, and scale your own private cellular network, with all required hardware and software provided by AWS.
CORRECT: “AWS Private 5G” is the correct answer (as explained above).
INCORRECT: “AWS Wavelength” is incorrect. AWS Wavelength embeds AWS compute and storage services within 5G networks, providing mobile edge computing infrastructure for developing, deploying, and scaling ultra-low-latency applications.
INCORRECT: “AWS CloudHSM” is incorrect. AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud and has nothing to do with 5G.
INCORRECT: “AWS Outposts” is incorrect. AWS Outposts is a family of fully managed solutions delivering AWS infrastructure and services to virtually any on-premises or edge location for a truly consistent hybrid experience. It is not related to 5G.
Question 8:
To increase performance and redundancy for an application, a company has decided to run multiple implementations in different AWS Regions behind Network Load Balancers. The company currently advertises the application using two public IP addresses from separate /24 address ranges and would prefer not to change these. Users should be directed to the closest available application endpoint.
Which actions should a solutions architect take? (Select TWO.)
A. Create an AWS Global Accelerator and attach endpoints in each AWS Region
B. Create an Amazon Route 53 geolocation based routing policy
C. Create PTR records to map existing public IP addresses to an Alias
D. Assign new static anycast IP addresses and modify any existing pointers
E. Migrate both public IP addresses to the AWS Global Accelerator
Explanation
AWS Global Accelerator uses static IP addresses as fixed entry points for your application. You can migrate up to two /24 IPv4 address ranges and choose which /32 IP addresses to use when you create your accelerator.
This solution ensures the company can continue using the same IP addresses and they are able to direct traffic to the application endpoint in the AWS Region closest to the end user. Traffic is sent over the AWS global network for consistent performance.
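The two steps can be sketched as parameter sets (the CIDR ranges and addresses below are hypothetical examples, shaped like the inputs to the Global Accelerator BYOIP and accelerator APIs):

```python
# Step 1: bring one of the company's own /24 ranges into Global Accelerator
# (repeated for the second range).
provision_request = {
    "Cidr": "203.0.113.0/24",
}

# Step 2: create the accelerator using a chosen /32 from each migrated range,
# then attach endpoint groups in each Region.
accelerator_request = {
    "Name": "app-accelerator",
    "IpAddressType": "IPV4",
    "IpAddresses": ["203.0.113.10", "198.51.100.10"],
}
```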
CORRECT: “Create an AWS Global Accelerator and attach endpoints in each AWS Region” is a correct answer.
CORRECT: “Migrate both public IP addresses to the AWS Global Accelerator” is also a correct answer.
INCORRECT: “Create an Amazon Route 53 geolocation based routing policy” is incorrect. With this solution, new IP addresses would be required as there would be application endpoints in different Regions, so the existing addresses could not be retained.
INCORRECT: “Assign new static anycast IP addresses and modify any existing pointers” is incorrect. This is unnecessary as you can bring your own IP addresses to AWS Global Accelerator and this is preferred in this scenario.
INCORRECT: “Create PTR records to map existing public IP addresses to an Alias” is incorrect. This is not a workable solution for mapping existing IP addresses to an Amazon Route 53 Alias.
Question 12:
A company has two accounts in an AWS Organization. The accounts are: Prod1 and Prod2. An Amazon RDS database runs in the Prod1 account. Amazon EC2 instances run in the Prod2 account. The EC2 instances in the Prod2 account must access the RDS database.
How can a Solutions Architect meet this requirement MOST cost-effectively?
A. Create an AWS Lambda function in the Prod1 account to transfer data to the Amazon EC2 instances in the Prod2 account.
B. Set up VPC sharing with the Prod1 account as the owner and the Prod2 account as the participant to transfer the data.
C. Take a snapshot of the Amazon RDS database and share it with the Prod2 account. In the Prod2 account, restore the cluster using the shared snapshot.
D. Create a cross-Region Replica of the Amazon RDS database in the Prod2 account. Point the EC2 instances to the Replica endpoint.
Explanation
VPC sharing makes use of the AWS Resource Access Manager (AWS RAM) service. It enables the sharing of VPCs across accounts. In this model, the account that owns the VPC (owner) shares one or more subnets with other accounts (participants) that belong to the same organization from AWS Organizations.
This scenario could be implemented with Prod1 account as the VPC owner and the Prod2 account as a VPC participant. This would allow the central control of the shared resource whilst enabling the EC2 instances in Prod2 to access the database.
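A minimal sketch of the share, with the subnet ARN and account IDs as hypothetical placeholders (this mirrors the shape of an AWS RAM resource share):

```python
# Prod1 (owner) shares the subnet hosting the RDS database with Prod2
# (participant) via AWS Resource Access Manager.
resource_share = {
    "name": "prod1-vpc-share",
    "resourceArns": ["arn:aws:ec2:us-east-1:111111111111:subnet/subnet-db"],
    "principals": ["222222222222"],      # the Prod2 account
    "allowExternalPrincipals": False,    # both accounts are in the same Organization
}
```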
CORRECT: “Set up VPC sharing with the Prod1 account as the owner and the Prod2 account as the participant to transfer the data” is the correct answer.
INCORRECT: “Create an AWS Lambda function in the Prod1 account to transfer data to the Amazon EC2 instances in the Prod2 account” is incorrect. The question is not asking for transfer of data; the EC2 instances need to access the database. Therefore, a method of connecting to a database endpoint is required.
INCORRECT: “Create a cross-Region Replica of the Amazon RDS database in the Prod2 account. Point the EC2 instances to the Replica endpoint” is incorrect. You cannot create cross-Region replicas of RDS databases in different accounts.
INCORRECT: “Take a snapshot of the Amazon RDS database and share it with the Prod2 account. In the Prod2 account, restore the cluster using the shared snapshot” is incorrect. This is less cost-effective as there is now a second RDS database running.
Question 18:
A financial services company provides users with downloadable reports in PDF format. The company requires a solution that can seamlessly scale to meet the demands of a growing, global user base. The solution must be cost-effective and minimize operational overhead.
Which combination of services should a Solutions Architect recommend to meet these requirements?
A. Amazon CloudFront and Amazon S3.
B. AWS Lambda and Amazon DynamoDB.
C. Amazon Route 53 with Network Load Balancers.
D. Application Load Balancer with AWS Auto Scaling.
Explanation
The most cost-effective option is to use Amazon S3 for storing the PDF files and Amazon CloudFront for caching the files around the world in edge locations. This combination of services will provide seamless scalability and is cost-effective. This is also a serverless solution so operational overhead is minimized.
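Sketched as a minimal distribution configuration (bucket and origin names are hypothetical), the combination looks like this:

```python
# CloudFront distribution fronting the S3 bucket that holds the PDF reports.
# Edge locations cache the files close to the global user base.
distribution_config = {
    "Origins": [
        {"Id": "reports-s3", "DomainName": "reports.s3.amazonaws.com"},
    ],
    "DefaultCacheBehavior": {
        "TargetOriginId": "reports-s3",
        "ViewerProtocolPolicy": "redirect-to-https",  # serve over HTTPS only
    },
}
```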
CORRECT: “Amazon CloudFront and Amazon S3” is the correct answer.
INCORRECT: “AWS Lambda and Amazon DynamoDB” is incorrect. AWS Lambda can be used to process requests and serve traffic from DynamoDB. However, a front end like API Gateway may be required and DynamoDB would be less cost-effective compared to using S3.
INCORRECT: “Application Load Balancer with AWS Auto Scaling” is incorrect. This would use Amazon EC2 instances and load balancers which is more expensive.
INCORRECT: “Amazon Route 53 with Network Load Balancers” is incorrect. This would use Amazon EC2 instances and load balancers which is more expensive.
Question 30:
A dynamic website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). Users are distributed around the world, and many are reporting poor website performance. The company uses Amazon Route 53 for DNS.
Which set of actions will improve website performance while minimizing cost?
A. Create a latency-based Amazon Route 53 record for the ALB. Then launch new EC2 instances with larger instance sizes and register the instances with the ALB.
B. Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the Amazon Route 53 record to point to the CloudFront distribution.
C. Launch new EC2 instances running the website and ALBs in different Regions. Use AWS Global Accelerator to direct connections to the closest Region.
D. Host the website in an Amazon S3 bucket and delete the ALB and EC2 instances. Enable transfer acceleration and update the Amazon Route 53 record to point to the S3 bucket.
Explanation
The most cost-effective option for improving performance is to create an Amazon CloudFront distribution. CloudFront can be used to serve both static and dynamic content. This solution will ensure that wherever users are located they will experience improved performance due to the caching of content and the usage of the AWS global network.
CORRECT: “Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the Amazon Route 53 record to point to the CloudFront distribution” is the correct answer.
INCORRECT: “Launch new EC2 instances running the website and ALBs in different Regions. Use AWS Global Accelerator to direct connections to the closest Region” is incorrect. This is a more costly solution as there are more EC2 instances, ALBs, and Global Accelerator. Using CloudFront would be a better solution for this use case.
INCORRECT: “Create a latency-based Amazon Route 53 record for the ALB. Then launch new EC2 instances with larger instance sizes and register the instances with the ALB” is incorrect. With only one ALB, a latency-based record serves no purpose. Additionally, using larger instance sizes may not assist as it does not reduce latency for global users.
INCORRECT: “Host the website in an Amazon S3 bucket and delete the ALB and EC2 instances. Enable transfer acceleration and update the Amazon Route 53 record to point to the S3 bucket” is incorrect. Transfer acceleration offers performance benefits for uploading and downloading content to/from S3 buckets but the S3 bucket can only serve static content, not a dynamic website.
Question 50:
A Solutions Architect needs to select a low-cost, short-term option for adding resilience to an AWS Direct Connect connection. What is the MOST cost-effective solution to provide a backup for the Direct Connect connection?
A. Configure AWS Transit Gateway with an IPSec VPN backup
B. Configure an IPSec VPN connection over the Direct Connect link
C. Implement a second AWS Direct Connect connection
D. Implement an IPSec VPN connection and use the same BGP prefix
Explanation
This is the most cost-effective solution. With this option both the Direct Connect connection and IPSec VPN are active and being advertised using the Border Gateway Protocol (BGP). The Direct Connect link will always be preferred unless it is unavailable.
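The failover behavior can be sketched as a minimal function: both paths advertise the same BGP prefix, and the Direct Connect path wins whenever it is available.

```python
def select_path(dx_available: bool) -> str:
    """Return the active path for the advertised BGP prefix.

    Direct Connect is always preferred over the IPSec VPN when both
    advertise the same prefix; the VPN only carries traffic if the
    Direct Connect link goes down.
    """
    return "direct-connect" if dx_available else "ipsec-vpn"
```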
CORRECT: “Implement an IPSec VPN connection and use the same BGP prefix” is the correct answer.
INCORRECT: “Implement a second AWS Direct Connect connection” is incorrect. This is not a short-term or low-cost option as it takes time to implement and is costly.
INCORRECT: “Configure AWS Transit Gateway with an IPSec VPN backup” is incorrect. This is a workable solution and provides some advantages. However, you do need to pay for the Transit Gateway so it is not the most cost-effective option and probably not suitable for a short-term need.
INCORRECT: “Configure an IPSec VPN connection over the Direct Connect link” is incorrect. This is not a solution to the problem as the VPN connection is going over the Direct Connect link. This is something you might do to add encryption to Direct Connect but it doesn’t make it more resilient.
Question 51:
Three Amazon VPCs are used by a company in the same region. The company has two AWS Direct Connect connections to two separate company offices and wishes to share these with all three VPCs. A Solutions Architect has created an AWS Direct Connect gateway. How can the required connectivity be configured?
A. Associate the Direct Connect gateway to a transit gateway
B. Create a VPC peering connection between the VPCs and route entries for the Direct Connect Gateway
C. Create a transit virtual interface between the Direct Connect gateway and each VPC
D. Associate the Direct Connect gateway to a virtual private gateway in each VPC
Explanation
You can manage a single connection for multiple VPCs or VPNs that are in the same Region by associating a Direct Connect gateway to a transit gateway. The solution involves the following components:
- A transit gateway that has VPC attachments.
- A Direct Connect gateway.
- An association between the Direct Connect gateway and the transit gateway.
- A transit virtual interface that is attached to the Direct Connect gateway.
The following diagram depicts this configuration:
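The four components can also be sketched as a simple topology (all IDs below are hypothetical placeholders):

```python
# DX gateway associated with a transit gateway that attaches all three VPCs;
# one transit VIF per Direct Connect connection (one per office).
topology = {
    "transit_gateway": {
        "id": "tgw-01",
        "vpc_attachments": ["vpc-a", "vpc-b", "vpc-c"],
    },
    "direct_connect_gateway": {"id": "dxgw-01"},
    "association": ("dxgw-01", "tgw-01"),
    "transit_vifs": ["dxvif-office-1", "dxvif-office-2"],
}
```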
CORRECT: “Associate the Direct Connect gateway to a transit gateway” is the correct answer.
INCORRECT: “Associate the Direct Connect gateway to a virtual private gateway in each VPC” is incorrect. For VPCs in the same Region a virtual private gateway is not necessary; a transit gateway can be configured instead.
INCORRECT: “Create a VPC peering connection between the VPCs and route entries for the Direct Connect Gateway” is incorrect. You cannot add route entries for a Direct Connect gateway to each VPC and enable routing. Use a transit gateway instead.
INCORRECT: “Create a transit virtual interface between the Direct Connect gateway and each VPC” is incorrect. The transit virtual interface is attached to the Direct Connect gateway on the connection side, not the VPC/transit gateway side.
Question 61:
A web application hosts static and dynamic content. The application runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The database tier runs on an Amazon Aurora database. A Solutions Architect needs to make the application more resilient to periodic increases in request rates.
Which architecture should the Solutions Architect implement? (Select TWO.)
A. Add an Amazon CloudFront distribution.
B. Add an AWS Transit Gateway.
C. Add an AWS Direct Connect link.
D. Add Aurora Replicas.
E. Add an AWS Global Accelerator.
Explanation
Using an Amazon CloudFront distribution can help reduce the impact of increases in request rates as content is cached at edge locations and delivered via the AWS global network. For the database layer, Aurora Replicas will assist with serving read requests, which reduces the load on the main database instance.
CORRECT: “Add Aurora Replicas” is a correct answer.
CORRECT: “Add an Amazon CloudFront distribution” is also a correct answer.
INCORRECT: “Add an AWS Transit Gateway” is incorrect. This service offers no value in this situation.
INCORRECT: “Add an AWS Direct Connect link” is incorrect. This would only improve network performance for users connecting from an on-premises location.
INCORRECT: “Add an AWS Global Accelerator” is incorrect. CloudFront is better suited to this use case as it caches static content and improves performance for dynamic content.
Question 2:
A company runs a streaming application on AWS that ingests data in near real-time and then processes the data. The data processing takes 30 minutes to complete. As the volume of data being ingested by the application has increased, high latency has occurred. A Solutions Architect needs to design a scalable and serverless solution to improve performance.
Which combination of steps should the Solutions Architect take? (Select TWO.)
A. Use Amazon Kinesis Data Firehose to ingest the data.
B. Use AWS Lambda with AWS Step Functions to process the data.
C. Use containers running on AWS Fargate to process the data.
D. Use Amazon EC2 instances in a placement group to process the data.
E. Use Amazon Simple Queue Service (SQS) to ingest the data.
Explanation
The application is a streaming application that ingests near real-time data. This is a good fit for Amazon Kinesis Data Firehose, which can ingest data and load it directly to a data store where it can be subsequently processed. We then need a serverless solution for processing the data. AWS Fargate is a serverless compute engine that works with Amazon ECS for running Docker containers on AWS.
This solution will seamlessly scale for the data ingestion and processing. It is also fully serverless.
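The ingestion side can be sketched as a delivery stream configuration buffering records into S3, where the Fargate tasks pick them up for the 30-minute processing runs (the stream and bucket names are hypothetical):

```python
# Firehose delivery stream: applications write records directly to the
# stream, and Firehose batches them into the S3 bucket.
delivery_stream_params = {
    "DeliveryStreamName": "app-ingest",
    "DeliveryStreamType": "DirectPut",
    "ExtendedS3DestinationConfiguration": {
        "BucketARN": "arn:aws:s3:::app-ingest-bucket",
        # Buffer up to 5 minutes or 64 MB before delivering a batch.
        "BufferingHints": {"IntervalInSeconds": 300, "SizeInMBs": 64},
    },
}
```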
CORRECT: “Use Amazon Kinesis Data Firehose to ingest the data” is a correct answer.
CORRECT: “Use containers running on AWS Fargate to process the data” is also a correct answer.
INCORRECT: “Use AWS Lambda with AWS Step Functions to process the data” is incorrect. Lambda has a maximum execution time of 900 seconds (15 minutes), so it is not possible to use AWS Lambda functions for processing the data.
INCORRECT: “Use Amazon Simple Queue Service (SQS) to ingest the data” is incorrect. SQS does not ingest data; you must use an application process to place messages in the queue and another process to consume and process the messages from the queue.
INCORRECT: “Use Amazon EC2 instances in a placement group to process the data” is incorrect. A placement group with EC2 instances is not a serverless solution as you must manage the EC2 instances.
Question 27:
The log files of a proprietary application must be analyzed. The log files are stored in an Amazon S3 bucket in JSON format. Query execution will be on-demand and simple. It is essential for a solutions architect to perform the analysis with minimal changes to the existing architecture.
How can a solutions architect meet these requirements with the LOWEST amount of operational overhead?
A. Use Amazon Redshift to place all the content in one place and run the SQL queries as and when required.
B. Use Amazon CloudWatch Logs for log storage. Run SQL queries on demand from the Amazon CloudWatch console.
C. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries on demand.
D. Use Amazon Athena to query and analyze the data in Amazon S3 using standard SQL queries on demand.
Explanation
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. The Solutions Architect could easily use Amazon Athena to query the logs on demand without refactoring any other parts of the application.
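As a hedged illustration, an on-demand query against the JSON logs might look like the following (the table and column names are hypothetical; Athena would read them via a table defined over the S3 bucket):

```python
# The kind of standard SQL Athena can run directly against the JSON
# log files in S3, with no infrastructure to manage.
query = """
SELECT status, COUNT(*) AS hits
FROM app_logs
WHERE status >= 500
GROUP BY status
ORDER BY hits DESC
"""
```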
CORRECT: “Use Amazon Athena to query and analyze the data in Amazon S3 using standard SQL queries on demand” is the correct answer (as explained above).
INCORRECT: “Use Amazon Redshift to place all the content in one place and run the SQL queries as and when required” is incorrect. This would take a significant amount of refactoring by moving all the application log data into Amazon Redshift.
INCORRECT: “Use Amazon CloudWatch Logs for log storage. Run SQL queries on demand from the Amazon CloudWatch console” is incorrect. Though you can use CloudWatch Logs Insights to run queries on log files, these are not SQL queries, and this is not an efficient solution as it will require a lot of refactoring.
INCORRECT: “Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries on demand” is incorrect. This would take a significant amount of refactoring by moving all the application log data into AWS Glue and using an EMR cluster to analyze the logs.
Question 3:
A company is deploying an Amazon ElastiCache for Redis cluster. To enhance security a password should be required to access the database. What should the solutions architect use?
A. VPC Security Group
B. Redis AUTH command
C. AWS IAM Policy
D. AWS Directory Service
Explanation
Redis authentication tokens enable Redis to require a token (password) before allowing clients to execute commands, thereby improving data security.
You can require that users enter a token on a token-protected Redis server. To do this, include the parameter --auth-token (API: AuthToken) with the correct token when you create your replication group or cluster. Also include it in all subsequent commands to the replication group or cluster.
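A minimal sketch of the relevant creation parameters (the group ID and token value are hypothetical placeholders; clients then authenticate with AUTH before running commands):

```python
# Replication group created with an auth token. Note that using AuthToken
# requires in-transit encryption to be enabled.
replication_group_params = {
    "ReplicationGroupId": "secure-redis",
    "TransitEncryptionEnabled": True,
    "AuthToken": "example-strong-token-1234567890",
}
```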
CORRECT: “Redis AUTH command” is the correct answer.
INCORRECT: “AWS Directory Service” is incorrect. This is a managed Microsoft Active Directory service and cannot add password protection to Redis.
INCORRECT: “AWS IAM Policy” is incorrect. You cannot use an IAM policy to enforce a password on Redis.
INCORRECT: “VPC Security Group” is incorrect. A security group protects at the network layer, it does not affect application authentication.
Question 5:
A DevOps team uses an Amazon RDS MySQL database for running resource-intensive tests each month. The instance has Performance Insights enabled and is only used once a month for up to 48 hours. As part of an effort to reduce AWS spend, the team wants to reduce the cost of running the tests without reducing the memory and compute attributes of the DB instance.
Which solution meets these requirements MOST cost-effectively?
A. Modify the DB instance size to a smaller capacity instance when all the tests have been completed. Scale up again when required.
B. Create a snapshot of the database when the tests are completed. Terminate the DB instance. Create a new DB instance from the snapshot when required.
C. Create an Auto Scaling group for the DB instance and reduce the desired capacity to 0 once the tests are completed.
D. Stop the DB instance once all tests are completed. Start the DB instance again when required.
Explanation
Taking a snapshot of the instance and storing the snapshot is the most cost-effective solution. When needed, a new database can be created from the snapshot. Performance Insights can be enabled on the new instance if needed. Note that the previous data from Performance Insights will not be associated with the new instance, however this was not a requirement.
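The monthly cycle can be sketched as two ordered step lists (the instance and snapshot identifiers are hypothetical):

```python
# After the tests: snapshot the database, then delete the instance so
# no instance-hour or provisioned-storage charges accrue for the instance.
teardown = [
    ("create_db_snapshot",
     {"DBSnapshotIdentifier": "test-db-final", "DBInstanceIdentifier": "test-db"}),
    ("delete_db_instance",
     {"DBInstanceIdentifier": "test-db", "SkipFinalSnapshot": True}),
]

# Before the next test window: restore a fresh instance from the snapshot.
restore = [
    ("restore_db_instance_from_db_snapshot",
     {"DBInstanceIdentifier": "test-db", "DBSnapshotIdentifier": "test-db-final"}),
]
```

Between runs, only the snapshot storage is billed, which is why this beats simply stopping the instance.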
CORRECT: “Create a snapshot of the database when the tests are completed. Terminate the DB instance. Create a new DB instance from the snapshot when required” is the correct answer (as explained above.)
INCORRECT: “Stop the DB instance once all tests are completed. Start the DB instance again when required” is incorrect. You will be charged when your instance is stopped. When an instance is stopped, you are charged for provisioned storage, manual snapshots, and automated backup storage within your specified retention window, but not for database instance hours. This is more costly compared to using snapshots.
INCORRECT: “Create an Auto Scaling group for the DB instance and reduce the desired capacity to 0 once the tests are completed” is incorrect. You cannot use Auto Scaling groups with Amazon RDS instances.
INCORRECT: “Modify the DB instance size to a smaller capacity instance when all the tests have been completed. Scale up again when required” is incorrect. This will reduce compute and memory capacity and will be more costly than taking a snapshot and terminating the DB.
Question 14:
An application is deployed using Amazon EC2 instances behind an Application Load Balancer running in an Auto Scaling group. The EC2 instances connect to an Amazon RDS database. When running performance testing on the application latency was experienced when performing queries on the database. The Amazon CloudWatch metrics for the EC2 instances do not show any performance issues.
How can a Solutions Architect resolve the application latency issues?
A. Add read replicas for the RDS database and direct read traffic to the replicas.
B. Enable Multi-AZ for the RDS database and direct read traffic to the standby.
C. Replace the EC2 instances with AWS Lambda functions.
D. Replace the Application Load Balancer with a Network Load Balancer.
Explanation
The latency is most likely due to the RDS database having insufficient resources to handle the load. This can be resolved by deploying a read replica and directing queries to the replica endpoint. This offloads the performance hit of the queries from the master database which will improve overall performance and reduce the latency associated with database queries.
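The read/write split the application would implement can be sketched as a tiny routing helper (the endpoint hostnames are hypothetical placeholders):

```python
# Writes must go to the primary endpoint; read queries can be offloaded
# to the read replica endpoint to relieve the primary.
PRIMARY = "mydb.abc123.us-east-1.rds.amazonaws.com"
REPLICA = "mydb-replica.abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(operation: str) -> str:
    """Return the endpoint to use for a given operation type."""
    return REPLICA if operation == "read" else PRIMARY
```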
CORRECT: “Add read replicas for the RDS database and direct read traffic to the replicas” is the correct answer.
INCORRECT: “Replace the EC2 instances with AWS Lambda functions” is incorrect. If the latency is being caused by the database layer, then this will not resolve the issues.
INCORRECT: “Replace the Application Load Balancer with a Network Load Balancer” is incorrect. If the latency is being caused by the database layer, then this will not resolve the issues.
INCORRECT: “Enable Multi-AZ for the RDS database and direct read traffic to the standby” is incorrect. You cannot read from the standby in an Amazon RDS database cluster (you can with Aurora though).
Question 25:
A Solutions Architect is migrating a distributed application from their on-premises environment into AWS. This application consists of an Apache Cassandra NoSQL database, with a containerized SUSE Linux compute layer with an additional storage layer made up of multiple Microsoft SQL Server databases. Once in the cloud the company wants to have as little operational overhead as possible, with no schema conversion during the migration and the company wants to host the architecture in a highly available and durable way.
Which of the following groups of services will provide the solutions architect with the best solution?
A. Run the NoSQL database on DynamoDB, and the compute layer on Amazon ECS on Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
B. Run the NoSQL database on DynamoDB, and the compute layer on Amazon ECS on EC2. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
C. Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS on Fargate. Use Amazon Aurora to host the second storage layer.
D. Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS on Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
Explanation
Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra-compatible database service. Combined with a containerized, serverless compute layer on Amazon ECS on Fargate and an RDS for Microsoft SQL Server database layer, this is a fully managed version of what currently exists on premises.
CORRECT: “Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS on Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer” is the correct answer (as explained above).
INCORRECT: “Run the NoSQL database on DynamoDB, and the compute layer on Amazon ECS on EC2. Use Amazon RDS for Microsoft SQL Server to host the second storage layer” is incorrect. DynamoDB is not compatible with Apache Cassandra, so migrating to it would require schema conversion. Additionally, ECS on EC2 adds operational overhead compared with Fargate.
INCORRECT: “Run the NoSQL database on DynamoDB, and the compute layer on Amazon ECS on Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer” is incorrect. DynamoDB is not compatible with Apache Cassandra, so migrating to it would require schema conversion.
INCORRECT: “Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS on Fargate. Use Amazon Aurora to host the second storage layer” is incorrect. Amazon Aurora does not have an option to run a Microsoft SQL Server database, therefore this answer is not correct.
Question 38:
An eCommerce company has a very popular web application that receives a large amount of traffic. The application must store customer profile data and shopping cart information in a database. A Solutions Architect must design the database solution to support peak loads of several million requests per second and millisecond response times. Operational overhead must be minimized, and scaling should not cause downtime.
Which database solution should the Solutions Architect recommend?
A. Amazon RDS
B. Amazon DynamoDB
C. Amazon Athena
D. Amazon Aurora
Explanation
Amazon DynamoDB is a non-relational database that is managed for you. It can scale without downtime and with minimal operational overhead. DynamoDB can support the request rates and response times required by this solution and is often used in eCommerce solutions and for session state use cases.
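A minimal sketch of a table suited to this workload (the table and attribute names are hypothetical): on-demand capacity mode scales with traffic without downtime or capacity planning.

```python
# Customer profile / shopping cart table keyed by customer ID.
table_params = {
    "TableName": "ShoppingCart",
    "KeySchema": [{"AttributeName": "customer_id", "KeyType": "HASH"}],
    "AttributeDefinitions": [
        {"AttributeName": "customer_id", "AttributeType": "S"},
    ],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand: scales automatically
}
```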
CORRECT: “Amazon DynamoDB” is the correct answer.
INCORRECT: “Amazon Aurora” is incorrect. Aurora will require downtime to scale as you must change the instance type.
INCORRECT: “Amazon RDS” is incorrect. RDS will require downtime to scale as you must change the instance type.
INCORRECT: “Amazon Athena” is incorrect. Athena is used for querying data in a data lake, it is not used for storing this type of information in a transactional database model.
Question 43:
An application in a private subnet needs to query data in an Amazon DynamoDB table. Use of the DynamoDB public endpoints must be avoided. What is the most EFFICIENT and secure method of enabling access to the table?
A. Create a software VPN between DynamoDB and the application in the private subnet
B. Create a gateway VPC endpoint and add an entry to the route table
C. Create an interface VPC endpoint in the VPC with an Elastic Network Interface (ENI)
D. Create a private Amazon DynamoDB endpoint and connect to it using an AWS VPN
Explanation
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
With a gateway endpoint you configure your route table to point to the endpoint. Amazon S3 and DynamoDB use gateway endpoints.
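A sketch of the gateway endpoint configuration (the VPC, route table, and Region identifiers are hypothetical); the route table association is what steers DynamoDB-bound traffic through the endpoint rather than the public internet:

```python
# Gateway VPC endpoint for DynamoDB, associated with the private
# subnet's route table.
endpoint_params = {
    "VpcId": "vpc-0abc",
    "ServiceName": "com.amazonaws.us-east-1.dynamodb",
    "VpcEndpointType": "Gateway",
    "RouteTableIds": ["rtb-0def"],
}
```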
The table below helps you to understand the key differences between the two different types of VPC endpoint:
CORRECT: “Create a gateway VPC endpoint and add an entry to the route table” is the correct answer.
INCORRECT: “Create an interface VPC endpoint in the VPC with an Elastic Network Interface (ENI)” is incorrect. This would be used for services that are supported by interface endpoints, not gateway endpoints.
INCORRECT: “Create a private Amazon DynamoDB endpoint and connect to it using an AWS VPN” is incorrect. You cannot create an Amazon DynamoDB private endpoint and connect to it over VPN. Private endpoints are VPC endpoints; instances in subnets reach them via route table entries or ENIs (depending on the service).
INCORRECT: “Create a software VPN between DynamoDB and the application in the private subnet” is incorrect. You cannot create a software VPN between DynamoDB and an application.
Question 49:
Over 500 TB of data must be analyzed using standard SQL business intelligence tools. The dataset consists of a combination of structured data and unstructured data. The unstructured data is small and stored on Amazon S3. Which AWS services are most suitable for performing analytics on the data?
A. Amazon DynamoDB with Amazon DynamoDB Accelerator (DAX)
B. Amazon RDS MariaDB with Amazon Athena
C. Amazon ElastiCache for Redis with cluster mode enabled
D. Amazon Redshift with Amazon Redshift Spectrum
Explanation
Amazon Redshift is an enterprise-level, petabyte scale, fully managed data warehousing service. An Amazon Redshift data warehouse is an enterprise-class relational database query and management system. Redshift supports client connections with many types of applications, including business intelligence (BI), reporting, data, and analytics tools.
Using Amazon Redshift Spectrum, you can efficiently query and retrieve structured and semistructured data from files in Amazon S3 without having to load the data into Amazon Redshift tables. Redshift Spectrum queries employ massive parallelism to execute very fast against large datasets.
Used together, Redshift and Redshift Spectrum are suitable for running massive analytics jobs on both the structured (Redshift data warehouse) and unstructured (Amazon S3) data.
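As a hedged illustration of how Spectrum exposes the S3 data (the schema, database, bucket, and IAM role names are all hypothetical), the external schema is defined once and then queried like any local table:

```python
# Define an external schema over the Glue/Athena data catalog, then join
# S3-resident data with warehouse tables in a single standard SQL query.
create_external_schema = """
CREATE EXTERNAL SCHEMA s3_data
FROM DATA CATALOG DATABASE 'analytics'
IAM_ROLE 'arn:aws:iam::111111111111:role/SpectrumRole'
"""

query = """
SELECT w.customer_id, SUM(s.amount) AS total
FROM warehouse.orders w
JOIN s3_data.events s ON s.customer_id = w.customer_id
GROUP BY w.customer_id
"""
```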
CORRECT: “Amazon Redshift with Amazon Redshift Spectrum” is the correct answer.
INCORRECT: “Amazon RDS MariaDB with Amazon Athena” is incorrect. Amazon RDS is not suitable for analytics (OLAP) use cases as it is designed for transactional (OLTP) use cases. Athena can however be used for running SQL queries on data on S3.
INCORRECT: “Amazon DynamoDB with Amazon DynamoDB Accelerator (DAX)” is incorrect. This is an example of a non-relational DB with a caching layer and is not suitable for an OLAP use case.
INCORRECT: “Amazon ElastiCache for Redis with cluster mode enabled” is incorrect. This is an example of an in-memory caching service. It is good for performance for transactional use cases.