Neal Davis - Practice Test 4 - Incorrect Flashcards

1
Q

Question 26:
A company needs to transfer data from an Amazon EC2 instance to an Amazon S3 bucket. The company must prevent API calls and data from being routed over the public internet and must use a private connection. Only the single EC2 instance can have access to upload data to the S3 bucket.
Which solution will meet these requirements?

A. Create an Amazon S3 interface VPC endpoint in the subnet where the EC2 instance is located. Add a resource policy to the S3 bucket to allow only the EC2 instance’s IAM role access.

B. Obtain the private IP address of the S3 bucket’s service API endpoint through the management console. Create a route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access.

C. Attach the appropriate security groups to the endpoint and use an S3 Bucket Policy on your S3 bucket to only allow the EC2 instance’s IAM role access to the bucket.

D. Run the nslookup tool from inside your EC2 instance to obtain the private IP address of the S3 bucket’s service API endpoint. Create a route in your VPCs route table to provide the EC2 instance with direct access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access.

A

Explanation
You can use two types of VPC endpoints to access Amazon S3: gateway endpoints and interface endpoints (using AWS PrivateLink). A gateway endpoint is a gateway that you specify in your route table to access Amazon S3 from your VPC over the AWS network. Interface endpoints extend the functionality of gateway endpoints by using private IP addresses to route requests to Amazon S3 from within your VPC, on premises, or from a VPC in another AWS Region using VPC peering or AWS Transit Gateway.
Using an interface endpoint to grant access to your S3 bucket from your EC2 instance keeps the traffic on the AWS global backbone instead of moving data over the public internet. Adding a resource policy that only allows the EC2 instance’s IAM role locks access down to that single instance.
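To illustrate the pattern, here is a minimal boto3 sketch using hypothetical VPC, subnet, security group, bucket, and role identifiers; it creates the interface endpoint in the instance's subnet and applies a bucket policy that denies uploads from any principal other than the instance's role:

import json
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

BUCKET = "example-upload-bucket"                                 # hypothetical bucket
ROLE_ARN = "arn:aws:iam::111122223333:role/ExampleInstanceRole"  # hypothetical role

# Interface endpoint for S3 in the subnet where the EC2 instance runs.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)

# Resource policy: deny PutObject to every principal except the instance's IAM role.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyInstanceRole",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {"StringNotEquals": {"aws:PrincipalArn": ROLE_ARN}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))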

CORRECT: “Create an Amazon S3 interface VPC endpoint in the subnet where the EC2 instance is located. Add a resource policy to the S3 bucket to allow only the EC2 instance’s IAM role access” is the correct answer (as explained above.)

INCORRECT: “Attach the appropriate security groups to the endpoint and use an S3 Bucket Policy on your S3 bucket to only allow the EC2 instance’s IAM role access to the bucket” is incorrect. This would not prevent you from sending your traffic over the public internet, and you would not meet the requirements.

INCORRECT: “Run the nslookup tool from inside your EC2 instance to obtain the private IP address of the S3 bucket’s service API endpoint. Create a route in your VPCs route table to provide the EC2 instance with direct access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access” is incorrect also. You would not need to use nslookup - as using an interface endpoint manages all of this for you.

INCORRECT: “Obtain the private IP address of the S3 bucket’s service API endpoint through the management console. Create a route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access” is incorrect. Finding the private IP address of the S3 bucket’s service API endpoint is not possible through the console. Also this would not prevent you from sending your traffic over the public internet, and you would not meet the requirements.

2
Q

Question 47:
A Solutions Architect is tasked with designing a fully serverless, microservices-based web application that requires a GraphQL API to provide a single entry point to the application.
Which AWS managed service could the Solutions Architect use?

A. API Gateway

B. AWS Lambda

C. Amazon Athena

D. AWS AppSync

A

Explanation
AWS AppSync is a serverless GraphQL and Pub/Sub API service that simplifies building modern web and mobile applications.
AWS AppSync GraphQL APIs simplify application development by providing a single endpoint to securely query or update data from multiple databases, microservices, and APIs.
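As a rough sketch of how this might be provisioned with boto3 (the API name and schema below are hypothetical), you create the GraphQL API and then upload a schema definition:

import boto3

appsync = boto3.client("appsync")

# Create the GraphQL API that will act as the single entry point.
api = appsync.create_graphql_api(name="orders-api", authenticationType="API_KEY")

# Upload a trivial example schema.
schema = b"""
schema { query: Query }
type Query { getOrder(id: ID!): Order }
type Order { id: ID! status: String }
"""
appsync.start_schema_creation(apiId=api["graphqlApi"]["apiId"], definition=schema)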

CORRECT: “AWS AppSync” is the correct answer (as explained above.)

INCORRECT: “API Gateway” is incorrect. Amazon API Gateway is used to build REST, HTTP, and WebSocket APIs; it does not provide a managed GraphQL service, which is what AWS AppSync offers.

INCORRECT: “Amazon Athena” is incorrect. Amazon Athena is a serverless query service that you can use to query data in Amazon S3 with SQL statements; it does not provide GraphQL APIs.

INCORRECT: “AWS Lambda” is incorrect. AWS Lambda is a serverless compute service and is not designed to build APIs.

3
Q

Question 53:
A Solutions Architect has placed an Amazon CloudFront distribution in front of their web server, which serves a highly accessed website to a global audience. The Solutions Architect needs to dynamically redirect each user to a different URL depending on where the user is accessing from, by running a small piece of custom code. This dynamic routing happens on every request, so the code must run at extremely low latency and low cost.
What solution will best achieve this goal?

A. Use Path Based Routing to route each user to the appropriate webpage behind an Application Load Balancer.

B. At the Edge Location, run your code with CloudFront Functions.

C. Use Route 53 Geo Proximity Routing to route users’ traffic to your resources based on their geographic location.

D. Redirect traffic by running your code within a Lambda function using Lambda@Edge.

A

Explanation
With CloudFront Functions in Amazon CloudFront, you can write lightweight functions in JavaScript for high-scale, latency-sensitive CDN customizations. Your functions can manipulate the requests and responses that flow through CloudFront, perform basic authentication and authorization, generate HTTP responses at the edge, and more. CloudFront Functions costs approximately one-sixth as much as Lambda@Edge and offers extremely low latency because the functions run directly on the hosts in the edge location, rather than in a Lambda function running elsewhere.
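A minimal sketch of deploying such a function with boto3 is shown below; the function name is hypothetical, and it assumes the CloudFront-Viewer-Country header is made available to the viewer-request function:

import boto3

cloudfront = boto3.client("cloudfront")

# Viewer-request function that redirects non-US viewers to a country-specific URL.
function_code = b"""
function handler(event) {
    var headers = event.request.headers;
    var country = headers['cloudfront-viewer-country']
        ? headers['cloudfront-viewer-country'].value : 'US';
    if (country !== 'US') {
        return {
            statusCode: 302,
            statusDescription: 'Found',
            headers: { location: { value: 'https://example.com/' + country.toLowerCase() + '/' } }
        };
    }
    return event.request;
}
"""

response = cloudfront.create_function(
    Name="geo-redirect",  # hypothetical name
    FunctionConfig={"Comment": "Redirect by viewer country", "Runtime": "cloudfront-js-1.0"},
    FunctionCode=function_code,
)

# Publish the function so it can be associated with the distribution's viewer-request event.
cloudfront.publish_function(Name="geo-redirect", IfMatch=response["ETag"])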

CORRECT: “At the Edge Location, run your code with CloudFront Functions” is the correct answer (as explained above.)

INCORRECT: “Redirect traffic by running your code within a Lambda function using Lambda@Edge” is incorrect. Although you could achieve this using Lambda@Edge, the question states the need for the lowest latency possible, and comparatively the lowest latency option is CloudFront Functions.

INCORRECT: “Use Path Based Routing to route each user to the appropriate webpage behind an Application Load Balancer” is incorrect. This architecture does not account for the fact that custom code needs to be run to make this happen.

INCORRECT: “Use Route 53 Geo Proximity Routing to route users’ traffic to your resources based on their geographic location” is incorrect. This may route users to a nearby resource, but it does not account for the fact that custom code needs to be run to perform the redirect.

4
Q

Question 31:
A large online retail company manages and runs an online e-commerce web application on AWS. This application serves hundreds of thousands of concurrent users during their peak operating hours, and as a result the company needs a highly scalable, near-real-time solution to share the order details with several other internal applications for order processing. Some additional processing to remove sensitive data also needs to occur before being stored in a document database for low-latency retrieval.
What should a solutions architect recommend to meet these requirements?

A. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.

B. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.

C. Store the transaction data in Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon writing. Use DynamoDB Streams to share the transaction data with other applications.

D. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.

A

Explanation
Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store data streams at any scale. With a Lambda function as a consumer writing to Amazon DynamoDB, the solution can scale to hundreds of thousands of concurrent users during peak operating hours. Kinesis Data Streams retains records for 24 hours by default, so the other internal applications can also read the data from the stream.
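A minimal sketch of the Lambda consumer (the table and field names are hypothetical) that strips sensitive fields before writing each transaction to DynamoDB:

import base64
import json
import boto3

table = boto3.resource("dynamodb").Table("Orders")   # hypothetical table name
SENSITIVE_FIELDS = ("card_number", "cvv")            # hypothetical sensitive fields


def lambda_handler(event, context):
    """Invoked by the Kinesis data stream event source mapping."""
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        for field in SENSITIVE_FIELDS:
            payload.pop(field, None)      # remove sensitive data before storage
        table.put_item(Item=payload)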

CORRECT: “Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream” is the correct answer (as explained above.)

INCORRECT: “Store the transaction data in Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon writing. Use DynamoDB Streams to share the transaction data with other applications” is incorrect. There’s no capability to write rules that remove sensitive data in DynamoDB.

INCORRECT: “Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3” is incorrect. Amazon Kinesis Data Firehose cannot load data directly to Amazon DynamoDB as it is not a supported destination.

INCORRECT: “Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3” is incorrect. This batch-based approach is inefficient and does not meet the near-real-time requirement; streaming the data through Kinesis Data Streams is a much better solution.

5
Q

Question 64:
A company is planning to use an Amazon S3 bucket to store a large volume of customer transaction data. The data will be structured into a hierarchy of objects, and they require a solution for running complex queries as quickly as possible. The solution must minimize operational overhead.
Which solution meets these requirements?

A. Use AWS Glue to transform the data into Amazon Redshift tables and then perform the queries.

B. Use Amazon Athena on Amazon S3 to perform the queries.

C. Use Amazon Elasticsearch Service on Amazon S3 to perform the queries.

D. Use AWS Data Pipeline to process and move the data to Amazon EMR and then perform the queries.

A

Explanation
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to set up or manage, and you can start analyzing data immediately. While Amazon Athena is ideal for quick, ad-hoc querying, it can also handle complex analysis, including large joins, window functions, and arrays.
Athena is the fastest way to query the data in Amazon S3 and offers the lowest operational overhead as it is a fully serverless solution.
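For example, a query could be started from boto3 as in the sketch below (the database, table, and result-location names are hypothetical):

import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString=(
        "SELECT customer_id, SUM(amount) AS total "
        "FROM transactions GROUP BY customer_id ORDER BY total DESC LIMIT 10"
    ),
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])   # poll get_query_execution with this ID for status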

CORRECT: “Use Amazon Athena on Amazon S3 to perform the queries” is the correct answer.

INCORRECT: “Use AWS Data Pipeline to process and move the data to Amazon EMR and then perform the queries” is incorrect. Amazon EMR is not required and would represent a more operationally costly solution.

INCORRECT: “Use Amazon Elasticsearch Service on Amazon S3 to perform the queries” is incorrect. Elasticsearch cannot perform SQL queries and join tables for data in Amazon S3.

INCORRECT: “Use AWS Glue to transform the data into Amazon Redshift tables and then perform the queries” is incorrect. RedShift is not required and would represent a more operationally costly solution.

6
Q

Question 22:
Every time an item in an Amazon DynamoDB table is modified, a record must be retained for compliance reasons. What is the most efficient solution for recording this information?

A. Enable DynamoDB Streams. Configure an AWS Lambda function to poll the stream and record the modified item data to an Amazon S3 bucket

B. Enable DynamoDB Global Tables. Enable DynamoDB streams on the multi-region table and save the output directly to an Amazon S3 bucket

C. Enable Amazon CloudTrail. Configure an Amazon EC2 instance to monitor activity in the CloudTrail log files and record changed items in another DynamoDB table

D. Enable Amazon CloudWatch Logs. Configure an AWS Lambda function to monitor the log files and record deleted item data to an Amazon S3 bucket

A

Explanation
Amazon DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours. Applications can access this log and view the data items as they appeared before and after they were modified, in near-real time.
For example, a DynamoDB stream can be consumed by a Lambda function that processes the item data and records each modification in a store such as Amazon S3 or CloudWatch Logs.
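A minimal sketch of such a Lambda consumer (the bucket name is hypothetical) that writes each modified item's stream record to S3 for retention:

import json
import boto3

s3 = boto3.client("s3")
AUDIT_BUCKET = "example-audit-bucket"   # hypothetical bucket name


def lambda_handler(event, context):
    """Invoked by the DynamoDB stream; retains a record of every item modification."""
    for record in event["Records"]:
        s3.put_object(
            Bucket=AUDIT_BUCKET,
            Key=f"changes/{record['eventID']}.json",
            Body=json.dumps(record["dynamodb"], default=str),  # old and new item images
        )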

CORRECT: “Enable DynamoDB Streams. Configure an AWS Lambda function to poll the stream and record the modified item data to an Amazon S3 bucket” is the correct answer.

INCORRECT: “Enable Amazon CloudWatch Logs. Configure an AWS Lambda function to monitor the log files and record deleted item data to an Amazon S3 bucket” is incorrect. The deleted item data will not be recorded in CloudWatch Logs.

INCORRECT: “Enable Amazon CloudTrail. Configure an Amazon EC2 instance to monitor activity in the CloudTrail log files and record changed items in another DynamoDB table” is incorrect. CloudTrail records API actions so it will not record the data from the item that was modified.

INCORRECT: “Enable DynamoDB Global Tables. Enable DynamoDB streams on the multi-region table and save the output directly to an Amazon S3 bucket” is incorrect. Global Tables is used for creating a multi-region, multi-master database. It is of no additional value for this requirement as you could just enable DynamoDB streams on the main table. You also cannot save modified data straight to an S3 bucket.

7
Q

Question 41:
A Solutions Architect is designing an application that will run on an Amazon EC2 instance. The application must asynchronously invoke an AWS Lambda function to analyze thousands of .CSV files. The services should be decoupled.
Which service can be used to decouple the compute services?

A. AWS OpsWorks

B. Amazon SWF

C. Amazon Kinesis

D. Amazon SNS

A

Explanation
You can use a Lambda function to process Amazon Simple Notification Service notifications. Amazon SNS supports Lambda functions as a target for messages sent to a topic. This solution decouples the Amazon EC2 application from Lambda and ensures the Lambda function is invoked.
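For example, the EC2 application might publish one message per file, as in the sketch below (the topic ARN and object key are hypothetical); SNS then invokes the subscribed Lambda function asynchronously:

import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:csv-analysis"   # hypothetical topic

# Publishing is fire-and-forget from the EC2 application's perspective,
# which decouples it from the Lambda function that analyzes the file.
sns.publish(
    TopicArn=TOPIC_ARN,
    Message=json.dumps({"bucket": "example-bucket", "key": "uploads/file-001.csv"}),
)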

CORRECT: “Amazon SNS” is the correct answer.

INCORRECT: “Amazon SWF” is incorrect. The Simple Workflow Service (SWF) is used for process automation. It is not well suited to this requirement.

INCORRECT: “Amazon Kinesis” is incorrect. This service is used for ingesting and processing real-time streaming data; it is not a suitable service to use solely for invoking a Lambda function.

INCORRECT: “AWS OpsWorks” is incorrect as this service is used for configuration management of systems using Chef or Puppet.

8
Q

Question 56:
An application is being monitored using Amazon GuardDuty. A Solutions Architect needs to be notified by email of medium to high severity events. How can this be achieved?

A. Create an Amazon CloudWatch Logs rule that triggers an AWS Lambda function

B. Configure an Amazon CloudTrail alarm that triggers based on GuardDuty API activity

C. Configure an Amazon CloudWatch alarm that triggers based on a GuardDuty metric

D. Create an Amazon CloudWatch events rule that triggers an Amazon SNS topic

A

Explanation
A CloudWatch Events rule can be used to set up automatic email notifications for medium to high severity findings to the email address of your choice. You simply create an Amazon SNS topic and then associate it with an Amazon CloudWatch Events rule.
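The sketch below shows the idea with boto3, using a hypothetical rule name and topic ARN and EventBridge numeric matching for severities 4.0 to 8.9; the SNS topic's access policy must also allow events.amazonaws.com to publish to it:

import json
import boto3

events = boto3.client("events")
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:guardduty-alerts"   # hypothetical topic

pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"severity": [{"numeric": [">=", 4, "<", 9]}]},   # medium and high findings
}

events.put_rule(Name="guardduty-medium-high", EventPattern=json.dumps(pattern))
events.put_targets(
    Rule="guardduty-medium-high",
    Targets=[{"Id": "sns-email", "Arn": TOPIC_ARN}],
)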

CORRECT: “Create an Amazon CloudWatch events rule that triggers an Amazon SNS topic” is the correct answer.

INCORRECT: “Configure an Amazon CloudWatch alarm that triggers based on a GuardDuty metric” is incorrect. There is no metric for GuardDuty that can be used for specific findings.

INCORRECT: “Create an Amazon CloudWatch Logs rule that triggers an AWS Lambda function” is incorrect. CloudWatch Logs is not the right CloudWatch service to use here; CloudWatch Events is the service used for reacting to changes in service state, such as GuardDuty findings.

INCORRECT: “Configure an Amazon CloudTrail alarm that triggers based on GuardDuty API activity” is incorrect. CloudTrail records API activity but does not provide alarms, so it cannot be used to trigger notifications based on GuardDuty findings.

9
Q

Question 7:
A company plans to provide developers with individual AWS accounts. The company will use AWS Organizations to provision the accounts. A Solutions Architect must implement secure auditing using AWS CloudTrail so that all events from all AWS accounts are logged. The developers must not be able to use root-level permissions to alter the AWS CloudTrail configuration in any way or access the log files in the S3 bucket. The auditing solution and security controls must automatically apply to all new developer accounts that are created.
Which action should the Solutions Architect take?

A. Create an IAM policy that prohibits changes to CloudTrail and attach it to the root user.

B. Create a service-linked role for CloudTrail with a policy condition that allows changes only from an Amazon Resource Name (ARN) in the management account.

C. Create a new trail in CloudTrail from within the management account with the organization trails option enabled.

D. Create a service control policy (SCP) that prohibits changes to CloudTrail and attach it to the developer accounts.

A

Explanation
You can create a CloudTrail trail in the management account with the organization trails option enabled and this will create the trail in all AWS accounts within the organization.
Member accounts can see the organization trail but can’t modify or delete it. By default, member accounts don’t have access to the log files for the organization trail in the Amazon S3 bucket.
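A minimal boto3 sketch, run from the management account with a hypothetical trail and bucket name (the bucket policy must already grant CloudTrail write access):

import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-audit-trail",                   # hypothetical trail name
    S3BucketName="example-org-cloudtrail",    # hypothetical bucket
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,                 # applies the trail to all member accounts
)
cloudtrail.start_logging(Name="org-audit-trail")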

CORRECT: “Create a new trail in CloudTrail from within the management account with the organization trails option enabled” is the correct answer.

INCORRECT: “Create an IAM policy that prohibits changes to CloudTrail and attach it to the root user” is incorrect. You cannot restrict the root user this way and should use the organization trails option or an SCP instead.

INCORRECT: “Create a service-linked role for CloudTrail with a policy condition that allows changes only from an Amazon Resource Name (ARN) in the management account” is incorrect. You cannot create service-linked roles, these are created by AWS for you.

INCORRECT: “Create a service control policy (SCP) that prohibits changes to CloudTrail and attach it to the developer accounts” is incorrect. An SCP can achieve the required outcome of limiting the ability to change the CloudTrail configuration, but the trail must still be created in each account and the SCP must be attached to each new account, which is not automatic.

10
Q

Question 17:
A financial institution with many departments wants to migrate to the AWS Cloud from their data center. Each department should have its own AWS accounts with preconfigured, limited access to authorized services based on each team’s needs, following the principle of least privilege.
What actions should be taken to ensure compliance with these security requirements?

A. Deploy a Landing Zone within AWS Organizations. Allow department administrators to use the Landing Zone to create new member accounts and networking. Grant the department’s AWS power user permissions on the created accounts.

B. Use AWS CloudFormation to create new member accounts and networking and use IAM roles to allow access to approved AWS services.

C. Configure AWS Organizations with SCPs and create new member accounts. Use AWS CloudFormation templates to configure the member account networking.

D. Deploy a Landing Zone within AWS Control Tower. Allow department administrators to use the Landing Zone to create new member accounts and networking. Grant the department’s AWS power user permissions on the created accounts.

A

Explanation
AWS Control Tower automates the setup of a new landing zone using best practices blueprints for identity, federated access, and account structure.
The account factory automates provisioning of new accounts in your organization. As a configurable account template, it helps you standardize the provisioning of new accounts with pre-approved account configurations. You can configure your account factory with pre-approved network configuration and region selections.
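Account Factory is exposed as an AWS Service Catalog product, so a new department account could be provisioned roughly as in the sketch below; the product, artifact, and parameter values are hypothetical and would be looked up in your own landing zone:

import boto3

servicecatalog = boto3.client("servicecatalog")

servicecatalog.provision_product(
    ProductId="prod-examplefactory",             # hypothetical Account Factory product
    ProvisioningArtifactId="pa-exampleversion",  # hypothetical product version
    ProvisionedProductName="finance-dept-account",
    ProvisioningParameters=[
        {"Key": "AccountName", "Value": "finance-dept"},
        {"Key": "AccountEmail", "Value": "finance-aws@example.com"},
        {"Key": "ManagedOrganizationalUnit", "Value": "Departments"},
        {"Key": "SSOUserEmail", "Value": "finance-admin@example.com"},
        {"Key": "SSOUserFirstName", "Value": "Finance"},
        {"Key": "SSOUserLastName", "Value": "Admin"},
    ],
)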

CORRECT: “Deploy a Landing Zone within AWS Control Tower. Allow department administrators to use the Landing Zone to create new member accounts and networking. Grant the department’s AWS power user permissions on the created accounts” is the correct answer (as explained above.)

INCORRECT: “Use AWS CloudFormation to create new member accounts and networking and use IAM roles to allow access to approved AWS services” is incorrect. Although you could perhaps make new AWS Accounts with AWS CloudFormation, the easiest way to do that is by using AWS Control Tower.

INCORRECT: “Configure AWS Organizations with SCPs and create new member accounts. Use AWS CloudFormation templates to configure the member account networking” is incorrect. You can make new accounts using AWS Organizations however the easiest way to do this is by using the AWS Control Tower service.

INCORRECT: “Deploy a Landing Zone within AWS Organizations. Allow department administrators to use the Landing Zone to create new member accounts and networking. Grant the department’s AWS power user permissions on the created accounts” is incorrect. Landing Zones do not get deployed within AWS Organizations.

11
Q

Question 9:
A Solutions Architect has been tasked with building an application that stores images to be used for a website. The website will be accessed by thousands of customers. The images within the application need to be transformed and processed as they are retrieved. The Solutions Architect would prefer to use managed services to achieve this, and the solution should be highly available, scalable, and able to serve users from around the world with low latency.

Which scenario represents the easiest solution for this task?

A. Store the images in Amazon S3, behind a CloudFront distribution. Use S3 Object Lambda to transform and process the images whenever a GET request is initiated on an object.

B. Store the images in a DynamoDB table, with DynamoDB Accelerator enabled. Use Amazon EventBridge to pass the data into an event bus as it is retrieved from DynamoDB and use AWS Lambda to process the data.

C. Store the images in Amazon S3, behind a CloudFront distribution. Use S3 Event Notifications to connect to a Lambda function to process and transform the images when a GET request is initiated on an object.

D. Store the images in a DynamoDB table, with DynamoDB Global Tables enabled. Provision a Lambda function to process the data on demand as it leaves the table.

A

Explanation
With S3 Object Lambda you can add your own code to S3 GET requests to modify and process data as it is returned to an application. For the first time, you can use custom code to modify the data returned by standard S3 GET requests to filter rows, dynamically resize images, redact confidential data, and much more. Powered by AWS Lambda functions, your code runs on infrastructure that is fully managed by AWS, eliminating the need to create and store derivative copies of your data or to run expensive proxies, all with no changes required to your applications.
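A minimal sketch of an S3 Object Lambda handler is shown below; transform_image is a hypothetical placeholder for the real image-processing step:

import boto3
import urllib3

http = urllib3.PoolManager()
s3 = boto3.client("s3")


def transform_image(data):
    # Hypothetical placeholder: real code would resize, watermark, or re-encode the image.
    return data


def lambda_handler(event, context):
    """Called for S3 GET requests routed through the Object Lambda access point."""
    ctx = event["getObjectContext"]
    original = http.request("GET", ctx["inputS3Url"]).data   # fetch the stored object

    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=transform_image(original),                      # return the transformed image
    )
    return {"statusCode": 200}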

CORRECT: “Store the images in Amazon S3, behind a CloudFront distribution. Use S3 Object Lambda to transform and process the images whenever a GET request is initiated on an object” is the correct answer (as explained above.)

INCORRECT: “Store the images in a DynamoDB table, with DynamoDB Global Tables enabled. Provision a Lambda function to process the data on demand as it leaves the table” is incorrect. DynamoDB is not well suited to storing and serving image data, and adding a Lambda function to process items as they are read requires more manual provisioning than using S3 Object Lambda.

INCORRECT: “Store the images in Amazon S3, behind a CloudFront distribution. Use S3 Event Notifications to connect to a Lambda function to process and transform the images when a GET request is initiated on an object” is incorrect. S3 Event Notifications are generated for events such as object creation and deletion, not for GET requests, so this cannot transform images as they are retrieved; S3 Object Lambda is designed for exactly this use case and manages the integration for you.

INCORRECT: “Store the images in a DynamoDB table, with DynamoDB Accelerator enabled. Use Amazon EventBridge to pass the data into an event bus as it is retrieved from DynamoDB and use AWS Lambda to process the data” is incorrect. DynamoDB is not well suited to storing and serving image data, and wiring EventBridge and Lambda into the retrieval path requires far more manual provisioning than using S3 Object Lambda.

12
Q

Question 20:
A retail organization sends coupons out twice a week and this results in a predictable surge in sales traffic. The application runs on Amazon EC2 instances behind an Elastic Load Balancer. The organization is looking for ways to lower costs while ensuring they meet the demands of their customers.
How can they achieve this goal?

A. Increase the instance size of the existing EC2 instances

B. Use a mixture of spot instances and on demand instances

C. Purchase Amazon EC2 dedicated hosts

D. Use capacity reservations with savings plans

A

Explanation
On-Demand Capacity Reservations enable you to reserve compute capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. By creating Capacity Reservations, you ensure that you always have access to EC2 capacity when you need it, for as long as you need it. When used in combination with savings plans, you can also gain the advantages of cost reduction.
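For example, capacity for the coupon-day surge could be reserved as in the hypothetical sketch below; a Savings Plan covering the instance family is purchased separately to bring the cost down:

import boto3

ec2 = boto3.client("ec2")

ec2.create_capacity_reservation(
    InstanceType="m5.large",            # hypothetical sizing
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",
    InstanceCount=20,
    EndDateType="unlimited",            # keep the reservation until explicitly cancelled
)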

CORRECT: “Use capacity reservations with savings plans” is the correct answer.

INCORRECT: “Use a mixture of spot instances and on demand instances” is incorrect. You can mix Spot and On-Demand Instances in an Auto Scaling group. However, Spot capacity can be reclaimed at short notice and prices fluctuate, which is a risk for a regular, predictable surge in traffic.

INCORRECT: “Increase the instance size of the existing EC2 instances” is incorrect. This would add more cost all of the time rather than catering for the temporary increases in traffic.

INCORRECT: “Purchase Amazon EC2 dedicated hosts” is incorrect. This is not a way to save cost as dedicated hosts are much more expensive than shared hosts.

13
Q

Question 29:
An application that runs a computational fluid dynamics workload uses a tightly-coupled HPC architecture that uses the MPI protocol and runs across many nodes. A service-managed deployment is required to minimize operational overhead.
Which deployment option is MOST suitable for provisioning and managing the resources required for this use case?

A. Use AWS Elastic Beanstalk to provision and manage the EC2 instances

B. Use Amazon EC2 Auto Scaling to deploy instances in multiple subnets

C. Use AWS Batch to deploy a multi-node parallel job

D. Use AWS CloudFormation to deploy a Cluster Placement Group on EC2

A

Explanation
AWS Batch multi-node parallel jobs enable you to run single jobs that span multiple Amazon EC2 instances. With AWS Batch multi-node parallel jobs, you can run large-scale, tightly coupled, high-performance computing applications and distributed GPU model training without the need to launch, configure, and manage Amazon EC2 resources directly.
An AWS Batch multi-node parallel job is compatible with any framework that supports IP-based inter-node communication, such as Apache MXNet, TensorFlow, Caffe2, or Message Passing Interface (MPI).
This is the most efficient approach to deploy the resources required and supports the application requirements most effectively.
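A rough boto3 sketch of registering such a job definition is shown below; the image URI and node sizing are hypothetical:

import boto3

batch = boto3.client("batch")

batch.register_job_definition(
    jobDefinitionName="cfd-mpi-job",     # hypothetical name
    type="multinode",
    nodeProperties={
        "numNodes": 8,
        "mainNode": 0,
        "nodeRangeProperties": [{
            "targetNodes": "0:",         # all nodes share the same container definition
            "container": {
                "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/cfd-mpi:latest",
                "resourceRequirements": [
                    {"type": "VCPU", "value": "36"},
                    {"type": "MEMORY", "value": "65536"},
                ],
            },
        }],
    },
)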

CORRECT: “Use AWS Batch to deploy a multi-node parallel job” is the correct answer.

INCORRECT: “Use Amazon EC2 Auto Scaling to deploy instances in multiple subnets” is incorrect. This is not the best solution for a tightly-coupled HPC workload with specific requirements such as MPI support.

INCORRECT: “Use AWS CloudFormation to deploy a Cluster Placement Group on EC2” is incorrect. This would deploy a cluster placement group but not manage it. AWS Batch is a better fit for large scale workloads such as this.

INCORRECT: “Use AWS Elastic Beanstalk to provision and manage the EC2 instances” is incorrect. You can certainly provision and manage EC2 instances with Elastic Beanstalk, but this scenario is a specific workload that requires MPI support and the management of an HPC deployment across a large number of nodes. AWS Batch is more suitable.

14
Q

Question 33:
A company runs a containerized application on a Kubernetes cluster in an on-premises data center. The application uses a MongoDB database to store data. The application will be migrated to AWS, but no code changes or deployment method changes are possible at this time due to time and resource constraints. Operational efficiency is critical.
Which solution meets these requirements?

A. Use Amazon Elastic Container Service (Amazon ECS) with worker nodes on Amazon EC2 for compute, as well as MongoDB on EC2 for data storage.

B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with worker nodes on Amazon EC2 for compute and Amazon DynamoDB for data storage.

C. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for the data storage.

D. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data storage.

A

Explanation
The easiest way to lift this application out of the data center with minimal code changes is to use the Elastic Kubernetes Service (Amazon EKS) on Fargate for the compute tier and Amazon DocumentDB (with MongoDB compatibility) for data storage.
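For the compute tier, a Fargate profile can be added to the EKS cluster so the existing pods run without managed nodes; the sketch below uses hypothetical names and ARNs, and the application's MongoDB connection string is simply pointed at the Amazon DocumentDB cluster endpoint:

import boto3

eks = boto3.client("eks")

eks.create_fargate_profile(
    fargateProfileName="app-profile",        # hypothetical profile name
    clusterName="migrated-cluster",          # hypothetical EKS cluster
    podExecutionRoleArn="arn:aws:iam::111122223333:role/EKSFargatePodExecutionRole",
    subnets=["subnet-0123456789abcdef0"],
    selectors=[{"namespace": "default"}],    # pods in this namespace run on Fargate
)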

CORRECT: “Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data storage” is the correct answer (as explained above.)

INCORRECT: “Use Amazon Elastic Container Service (Amazon ECS) with worker nodes on Amazon EC2 for compute, as well as MongoDB on EC2 for data storage” is incorrect. Using Amazon ECS will take some application refactoring, so it involves code changes and is not operationally efficient.

INCORRECT: “Use Amazon Elastic Kubernetes Service (Amazon EKS) with worker nodes on Amazon EC2 for compute and Amazon DynamoDB for data storage” is incorrect. Using DynamoDB would take a refactoring of the application code and is not operationally efficient.

INCORRECT: “Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for the data storage” is incorrect. Moving from Kubernetes to Amazon ECS and from MongoDB to DynamoDB would both require application changes, so this is not operationally efficient.

15
Q

Question 63:
A company runs a legacy application on an Amazon EC2 Linux instance. The application code cannot be modified, and the system cannot run on more than one instance. A Solutions Architect must design a resilient solution that can improve the recovery time for the system.
What should the Solutions Architect recommend to meet these requirements?

A. Create an Amazon CloudWatch alarm to recover the EC2 instance in case of failure.

B. Launch the EC2 instance with two Amazon EBS volumes and configure RAID 1.

C. Deploy the EC2 instance in a cluster placement group in an Availability Zone.

D. Launch the EC2 instance with two Amazon EBS volumes and configure RAID 0.

A

Explanation
A RAID array uses multiple EBS volumes to improve performance or redundancy. When fault tolerance is more important than I/O performance a RAID 1 array should be used which creates a mirror of your data for extra redundancy.
RAID 0 stripes data across volumes to improve I/O performance but provides no redundancy, whereas RAID 1 mirrors the data so the instance can keep running, and recover faster, if one volume fails.
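A hypothetical boto3 sketch of provisioning and attaching the two volumes is shown below; the RAID 1 mirror itself is then created inside the operating system (for example with mdadm):

import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"     # hypothetical instance in us-east-1a

# Create two identical volumes in the instance's AZ and attach them as separate devices.
for device in ("/dev/sdf", "/dev/sdg"):
    volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp3")
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
    ec2.attach_volume(VolumeId=volume["VolumeId"], InstanceId=INSTANCE_ID, Device=device)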

CORRECT: “Launch the EC2 instance with two Amazon EBS volumes and configure RAID 1” is the correct answer.

INCORRECT: “Launch the EC2 instance with two Amazon EBS volumes and configure RAID 0” is incorrect. RAID 0 is used for striping which improves performance but not redundancy.

INCORRECT: “Create an Amazon CloudWatch alarm to recover the EC2 instance in case of failure” is incorrect. This does not improve recovery time; it only attempts to recover the instance when there is an issue with the underlying hardware.

INCORRECT: “Deploy the EC2 instance in a cluster placement group in an Availability Zone” is incorrect. You cannot gain any advantages by deploying a single instance into a cluster placement group.

16
Q

Question 13:
A company has over 200 TB of log files in an Amazon S3 bucket. The company must process the files using a Linux-based software application that will extract and summarize data from the log files and store the output in a separate Amazon S3 bucket. The company needs to minimize data transfer charges associated with the processing of this data.
How can a Solutions Architect meet these requirements?

A. Use an on-premises virtual machine for processing the data. Retrieve the log files from the S3 bucket and upload the output to another S3 bucket in the same Region.

B. Connect an AWS Lambda function to the S3 bucket via a VPC endpoint. Process the log files and store the output to another S3 bucket in the same Region.

C. Launch an Amazon EC2 instance in the same Region as the S3 bucket. Process the log files and upload the output to another S3 bucket in a different Region.

D. Launch an Amazon EC2 instance in the same Region as the S3 bucket. Process the log files and upload the output to another S3 bucket in the same Region.

A

Explanation
The software application must be installed on a Linux operating system, so we must use Amazon EC2 or an on-premises VM. To avoid data transfer charges, however, we must ensure that the data does not egress the AWS Region. The best solution is to use an Amazon EC2 instance in the same Region as the S3 bucket that contains the log files. The processed output files must also be stored in a bucket in the same Region to avoid any data going out from EC2 to another Region.

CORRECT: “Launch an Amazon EC2 instance in the same Region as the S3 bucket. Process the log files and upload the output to another S3 bucket in the same Region” is the correct answer.

INCORRECT: “Use an on-premises virtual machine for processing the data. Retrieve the log files from the S3 bucket and upload the output to another S3 bucket in the same Region” is incorrect. The data would need to egress the AWS Region incurring data transfer charges in this configuration.

INCORRECT: “Launch an Amazon EC2 instance in the same Region as the S3 bucket. Process the log files and upload the output to another S3 bucket in a different Region” is incorrect. The processed data would be going from the EC2 instance to a bucket in a different Region which would incur data transfer charges.

INCORRECT: “Connect an AWS Lambda function to the S3 bucket via a VPC endpoint. Process the log files and store the output to another S3 bucket in the same Region” is incorrect. You cannot install a Linux-based software application on AWS Lambda.

17
Q

Question 36:
A storage company creates and emails PDF statements to their customers at the end of each month. Customers must be able to download their statements from the company website for up to 30 days from when the statements were generated. When customers close their accounts, they are emailed a ZIP file that contains all the statements.
What is the MOST cost-effective storage solution for this situation?

A. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days.

B. Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Intelligent Tiering storage after 30 days.

C. Store the statements using the Amazon S3 Glacier storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier Deep Archive storage after 30 days.

D. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) storage after 30 days.

A

Explanation
The most cost-effective option is to store the PDF files in S3 Standard for 30 days where they can be easily downloaded by customers. Then, transition the objects to Amazon S3 Glacier which will reduce the storage costs. When a customer closes their account, the objects can be retrieved from S3 Glacier and provided to the customer as a ZIP file.
Be cautious of subtle changes to the answer options in questions like these as you may see several variations of similar questions on the exam. Also, be aware of the supported lifecycle transitions between S3 storage classes and the minimum storage durations.
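For example, the 30-day transition can be expressed as a lifecycle rule, as in the sketch below (the bucket and prefix names are hypothetical):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-statements-bucket",     # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "statements-to-glacier",
            "Filter": {"Prefix": "statements/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)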

CORRECT: “Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days” is the correct answer.

INCORRECT: “Store the statements using the Amazon S3 Glacier storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier Deep Archive storage after 30 days” is incorrect. Using Glacier will not allow customers to download their statements as the files would need to be restored. Also, the minimum storage duration before you can transition from Glacier is 90 days.

INCORRECT: “Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) storage after 30 days” is incorrect. This would work but is not as cost-effective as using Glacier for the longer-term storage.

INCORRECT: “Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Intelligent Tiering storage after 30 days” is incorrect. This would work but is not as cost-effective as using Glacier for the longer-term storage.

18
Q

Question 40:
A Solutions Architect needs to design a solution for providing a shared file system for company users in the AWS Cloud. The solution must be fault tolerant and should integrate with the company’s Microsoft Active Directory for access control.
Which storage solution meets these requirements?

A. Use Amazon S3 for storing the data and configure Amazon Cognito to connect S3 to Active Directory for access control.

B. Use an Amazon EC2 Windows instance to create a file share. Attach Amazon EBS volumes in different Availability Zones.

C. Create a file system with Amazon FSx for Windows File Server and enable Multi-AZ. Join Amazon FSx to Active Directory.

D. Create an Amazon EFS file system and configure AWS Single Sign-On with Active Directory.

A

Explanation
Amazon FSx for Windows File Server provides fully managed Microsoft Windows file servers, backed by a fully native Windows file system. Multi-AZ file systems provide high availability and failover support across multiple Availability Zones by provisioning and maintaining a standby file server in a separate Availability Zone within an AWS Region.
Amazon FSx works with Microsoft Active Directory (AD) to integrate with your existing Microsoft Windows environments. Active Directory is the Microsoft directory service used to store information about objects on the network and make this information easy for administrators and users to find and use.
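A minimal sketch of creating such a file system with boto3 is shown below; the directory ID, subnet IDs, and sizing are hypothetical:

import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                                    # GiB, hypothetical sizing
    StorageType="SSD",
    SubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-0123456789",                 # hypothetical managed AD directory
        "DeploymentType": "MULTI_AZ_1",                      # standby file server in a second AZ
        "PreferredSubnetId": "subnet-0123456789abcdef0",
        "ThroughputCapacity": 32,
    },
)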

CORRECT: “Create a file system with Amazon FSx for Windows File Server and enable Multi-AZ. Join Amazon FSx to Active Directory” is the correct answer.

INCORRECT: “Create an Amazon EFS file system and configure AWS Single Sign-On with Active Directory” is incorrect. You cannot configure AWS SSO for an EFS file system with Active Directory.

INCORRECT: “Use an Amazon EC2 Windows instance to create a file share. Attach Amazon EBS volumes in different Availability Zones” is incorrect. You cannot attach EBS volumes in different AZs to an instance.

INCORRECT: “Use Amazon S3 for storing the data and configure Amazon Cognito to connect S3 to Active Directory for access control” is incorrect. You cannot use Amazon Cognito to connect S3 to Active Directory.

19
Q

Question 59:
A large MongoDB database running on-premises must be migrated to Amazon DynamoDB within the next few weeks. The database is too large to migrate over the company’s limited internet bandwidth so an alternative solution must be used. What should a Solutions Architect recommend?

A. Enable compression on the MongoDB database and use the AWS Database Migration Service (DMS) to directly migrate the database to Amazon DynamoDB

B. Use the AWS Database Migration Service (DMS) to extract and load the data to an AWS Snowball Edge device. Complete the migration to Amazon DynamoDB using AWS DMS in the AWS Cloud

C. Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to Amazon DynamoDB

D. Setup an AWS Direct Connect and migrate the database to Amazon DynamoDB using the AWS Database Migration Service (DMS)

A

Explanation
Larger data migrations with AWS DMS can include many terabytes of information. This process can be cumbersome due to network bandwidth limits or just the sheer amount of data. AWS DMS can use Snowball Edge and Amazon S3 to migrate large databases more quickly than by other methods.
When you’re using an Edge device, the data migration process has the following stages:
1. You use the AWS Schema Conversion Tool (AWS SCT) to extract the data locally and move it to an Edge device.
2. You ship the Edge device or devices back to AWS.
3. After AWS receives your shipment, the Edge device automatically loads its data into an Amazon S3 bucket.
4. AWS DMS takes the files and migrates the data to the target data store. If you are using change data capture (CDC), those updates are written to the Amazon S3 bucket and then applied to the target data store.

CORRECT: “Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to Amazon DynamoDB” is the correct answer.

INCORRECT: “Setup an AWS Direct Connect and migrate the database to Amazon DynamoDB using the AWS Database Migration Service (DMS)” is incorrect as Direct Connect connections can take several weeks to implement.

INCORRECT: “Enable compression on the MongoDB database and use the AWS Database Migration Service (DMS) to directly migrate the database to Amazon DynamoDB” is incorrect. It is unlikely that compression will make enough of a difference, and the company wants to avoid the internet link, as stated in the scenario.

INCORRECT: “Use the AWS Database Migration Service (DMS) to extract and load the data to an AWS Snowball Edge device. Complete the migration to Amazon DynamoDB using AWS DMS in the AWS Cloud” is incorrect. This is the wrong order of tools: the Solutions Architect should use the SCT to extract and load the data onto the Snowball Edge device, and then use AWS DMS in the AWS Cloud to complete the migration.