Question 2 Flashcards

1
Q

A Solutions Architect has deployed an application on Amazon EC2 instances in a private subnet behind a Network Load Balancer (NLB) in a public subnet. Customers have attempted to connect from their office location and are unable to access the application. The targets were registered by instance-id and are all healthy in the associated target group.
What step should the Solutions Architect take to resolve the issue and enable access for the customers?
• ​
Check the security group for the EC2 instances to ensure it allows ingress from the NLB subnets.
• ​
Check the security group for the EC2 instances to ensure it allows ingress from the customer office.
• ​
Check the security group for the NLB to ensure it allows egress to the private subnet.
• ​
Check the security group for the NLB to ensure it allows ingress from the customer office.

A

• ​
Check the security group for the EC2 instances to ensure it allows ingress from the customer office.
(Correct)

Explanation
The Solutions Architect should check that the security group of the EC2 instances is allowing inbound connections from the customer office IP ranges. Note that NLBs do not have security groups configured and pass connections straight to EC2 instances with the source IP of the client preserved (when registered by instance-id).
With NLBs, when you register EC2 instances as targets, you must ensure that the security groups for these instances allow traffic on both the listener port and the health check port. We know that the health check port is already configured correctly as the targets are all healthy.
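For illustration, a minimal boto3 sketch of the fix is shown below; the security group ID, listener port, and office CIDR are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow the customer office range to reach the application's listener
# port on the instances' security group. All values are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",          # EC2 instances' security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,                     # listener port (assumed HTTPS)
        "ToPort": 443,
        "IpRanges": [{
            "CidrIp": "203.0.113.0/24",      # customer office CIDR
            "Description": "Customer office",
        }],
    }],
)
```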
CORRECT: "Check the security group for the EC2 instances to ensure it allows ingress from the customer office" is the correct answer.
INCORRECT: "Check the security group for the EC2 instances to ensure it allows ingress from the NLB subnets" is incorrect. This is not necessary as the source IPs of clients are preserved.
INCORRECT: "Check the security group for the NLB to ensure it allows ingress from the customer office" is incorrect. There is no security group associated with an NLB.
INCORRECT: "Check the security group for the NLB to ensure it allows egress to the private subnet" is incorrect. There is no security group associated with an NLB.
References:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#target-security-groups
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-security-identity-compliance/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

A company plans to build a gaming application in the AWS Cloud that will be used by Internet-based users. The application will run on a single instance and connections from users will be made over the UDP protocol. The company has requested that the service be implemented with a high level of security. A Solutions Architect has been asked to design a solution for the application on AWS.
Which combination of steps should the Solutions Architect take to meet these requirements? (Select THREE.)
• ​
Use a Network Load Balancer (NLB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the NLB’s Elastic IP address.

• ​
Configure a network ACL rule to block all non-UDP traffic. Associate the network ACL with the subnets that hold the load balancer instances.

• ​
Enable AWS Shield Advanced on all public-facing resources.

• ​
Use an Application Load Balancer (ALB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the ALB’s internet-facing fully qualified domain name (FQDN).
• ​
Use AWS Global Accelerator with an Elastic Load Balancer as an endpoint.
• ​
Define an AWS WAF rule to explicitly drop non-UDP traffic and associate the rule with the load balancer.

A

• ​
Use a Network Load Balancer (NLB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the NLB’s Elastic IP address.
(Correct)
• ​
Configure a network ACL rule to block all non-UDP traffic. Associate the network ACL with the subnets that hold the load balancer instances.
(Correct)
• ​
Enable AWS Shield Advanced on all public-facing resources.
(Correct)

Explanation
The Network Load Balancer (NLB) supports the UDP protocol and can be placed in front of the application instance. This configuration may add some security if the instance is running in a private subnet.
An NLB can be configured with an Elastic IP in each subnet in which it has nodes. In this case the application runs on a single instance in one subnet, so there will be one EIP.
Route 53 can be configured to resolve directly to the EIP rather than the DNS name of the NLB as there is only one IP address to return. To filter traffic the network ACL for the subnet can be configured to block all non-UDP traffic.
This solution meets all the stated requirements.
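For illustration only, the sketch below shows the two key pieces in boto3: a UDP listener on the NLB and a network ACL entry that allows only UDP (the ACL's implicit deny rule blocks everything else). All identifiers, ARNs, and ports are hypothetical placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")
ec2 = boto3.client("ec2")

# UDP listener on the NLB forwarding to the game server's target group.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:eu-west-1:123456789012:"
                    "loadbalancer/net/game-nlb/0123456789abcdef",
    Protocol="UDP",
    Port=3000,  # game port (placeholder)
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:"
                          "123456789012:targetgroup/game/0123456789abcdef",
    }],
)

# Network ACL entry allowing inbound UDP on the game port only; the
# ACL's implicit deny rule (*) blocks all other, non-UDP traffic.
# A matching outbound UDP entry is needed for return traffic.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100,
    Protocol="17",            # protocol number 17 = UDP
    RuleAction="allow",
    Egress=False,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 3000, "To": 3000},
)
```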
CORRECT: "Use a Network Load Balancer (NLB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the NLB’s Elastic IP address" is a correct answer.
CORRECT: "Configure a network ACL rule to block all non-UDP traffic. Associate the network ACL with the subnets that hold the load balancer instances" is also a correct answer.
CORRECT: "Enable AWS Shield Advanced on all public-facing resources" is also a correct answer.
INCORRECT: "Use an Application Load Balancer (ALB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the ALB’s internet-facing fully qualified domain name (FQDN)" is incorrect. An ALB only listens for HTTP and HTTPS traffic, which uses the TCP protocol. It does not support UDP.
INCORRECT: "Define an AWS WAF rule to explicitly drop non-UDP traffic and associate the rule with the load balancer" is incorrect. WAF works with ALBs but not with NLBs. WAF is also unnecessary as a network ACL can filter the traffic.
INCORRECT: "Use AWS Global Accelerator with an Elastic Load Balancer as an endpoint" is incorrect. AWS Global Accelerator provides improved performance and high availability when you have copies of your application running in multiple AWS Regions. It is not required in this solution.
References:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-compute/
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-security-identity-compliance/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
3
Q

A company is planning to migrate an application from an on-premises data center to the AWS Cloud. The application consists of stateful servers and a separate MySQL database. The application is expected to receive significant traffic and must scale seamlessly. The solution design on AWS includes an Amazon Aurora MySQL database, Amazon EC2 Auto Scaling and Elastic Load Balancing.
A Solutions Architect needs to finalize the design for the solution. Which of the following configurations will ensure a consistent user experience and seamless scalability for both the application and database tiers?
• ​
Add Aurora Replicas and define a scaling policy. Use a Network Load Balancer and set the load balancing algorithm type to least_outstanding_requests.
• ​
Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to round_robin.

• ​
Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to least_outstanding_requests.
• ​
Add Aurora Replicas and define a scaling policy. Use a Network Load Balancer and set the load balancing algorithm type to round_robin.

A

• ​
Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to round_robin.
(Correct)

Explanation
Aurora Auto Scaling dynamically adjusts the number of Aurora Replicas provisioned for an Aurora DB cluster using single-master replication. You define and apply a scaling policy to an Aurora DB cluster.
The scaling policy defines the minimum and maximum number of Aurora Replicas that Aurora Auto Scaling can manage. Based on the policy, Aurora Auto Scaling adjusts the number of Aurora Replicas up or down in response to actual workloads, determined by using Amazon CloudWatch metrics and target values.
By default, the round robin routing algorithm is used to route requests at the target group level. You can specify the least outstanding requests routing algorithm instead.
Consider using least outstanding requests when the requests for your application vary in complexity or your targets vary in processing capability. Round robin is a good choice when the requests and targets are similar, or if you need to distribute requests equally among targets.
In this case the round robin algorithm will be the best choice as the instances will have the same processing capability and requests should be routed evenly between them.
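As a minimal sketch, the algorithm is set as a target group attribute; the target group ARN below is a placeholder, and round_robin is shown explicitly even though it is the default.

```python
import boto3

elbv2 = boto3.client("elbv2")

# The routing algorithm is a target group attribute on the ALB;
# round_robin is the default and is shown explicitly here.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:eu-west-1:123456789012:"
                   "targetgroup/web/0123456789abcdef",
    Attributes=[{
        "Key": "load_balancing.algorithm.type",
        "Value": "round_robin",  # or "least_outstanding_requests"
    }],
)
```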
CORRECT: "Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to round_robin" is the correct answer.
INCORRECT: "Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to least_outstanding_requests" is incorrect. The least outstanding requests algorithm is not the best choice here as explained above.
INCORRECT: "Add Aurora Replicas and define a scaling policy. Use a Network Load Balancer and set the load balancing algorithm type to least_outstanding_requests" is incorrect. The NLB does not use this algorithm; it uses a flow hash algorithm.
INCORRECT: "Add Aurora Replicas and define a scaling policy. Use a Network Load Balancer and set the load balancing algorithm type to round_robin" is incorrect. The NLB does not use this algorithm; it uses a flow hash algorithm.
References:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Integrating.AutoScaling.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-database/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
4
Q

A web application is composed of an Application Load Balancer and EC2 instances across three Availability Zones. During peak load, the web servers operate at 95% utilization. The system is set up to use Reserved Instances to handle steady state load and On-Demand Instances to handle the peak load. Your manager has instructed you to review the current architecture and make the necessary changes to improve the system.
Which of the following provides the most cost-effective architecture to allow the application to recover quickly in the event that an Availability Zone is unavailable during peak load?
• ​
Launch an Auto Scaling group of Reserved instances on each AZ to handle the peak load. Retain the current set up for handling the steady state load.
• ​
Use a combination of Reserved and On-Demand instances on each AZ to handle both the steady state and peak load.
• ​
Launch a Spot Fleet using a diversified allocation strategy, with Auto Scaling enabled on each AZ to handle the peak load instead of On-Demand instances. Retain the current set up for handling the steady state load.

• ​
Use a combination of Spot and On-Demand instances on each AZ to handle both the steady state and peak load.

A

• ​
Launch a Spot Fleet using a diversified allocation strategy, with Auto Scaling enabled on each AZ to handle the peak load instead of On-Demand instances. Retain the current set up for handling the steady state load.
(Correct)

Explanation
The scenario requires a cost-effective architecture that allows the application to recover quickly, hence using an Auto Scaling group is a must to handle the peak load and improve both the availability and scalability of the application. Since the options that say "Use a combination of Spot and On-Demand instances on each AZ to handle both the steady state and peak load" and "Use a combination of Reserved and On-Demand instances on each AZ to handle both the steady state and peak load" did not mention the use of Auto Scaling groups, these options are incorrect.

Reserved Instances cost more than Spot Instances, so it is more suitable to use the latter to handle the peak load. That is why "Launch an Auto Scaling group of Reserved instances on each AZ to handle the peak load. Retain the current set up for handling the steady state load" is wrong even though it uses Auto Scaling.
Setting up a diversified allocation strategy for your Spot Fleet is a best practice to increase the chances that a spot request can be fulfilled by EC2 capacity in the event of an outage in one of the Availability Zones. Include each AZ available to you in the launch specifications and, instead of using the same subnet each time, use three unique subnets (each mapping to a different AZ), as shown in the sketch below.
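A hedged boto3 sketch of such a request follows; the AMI, instance type, fleet role, and subnet IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Diversified allocation across three subnets, each in a different AZ,
# so an AZ outage still leaves Spot capacity in the remaining pools.
ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/spot-fleet-role",
        "AllocationStrategy": "diversified",
        "TargetCapacity": 6,
        "LaunchSpecifications": [
            {"ImageId": "ami-0123456789abcdef0",   # placeholder AMI
             "InstanceType": "m5.large",
             "SubnetId": subnet}
            for subnet in ("subnet-aaa111", "subnet-bbb222", "subnet-ccc333")
        ],
    },
)
```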

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-examples.html#fleet-config5
https://d1.awsstatic.com/whitepapers/total-cost-of-operation-benefits-using-aws.pdf#page=11
https://github.com/awsdocs/amazon-ec2-user-guide/pull/56

Check out this AWS Billing and Cost Management Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-aws-billing-and-cost-management/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
5
Q

A company runs a web application in an on-premises data center in Paris. The application includes stateless web servers behind a load balancer, shared files in a NAS device, and a MySQL database server. The company plans to migrate the solution to AWS and has the following requirements:
· Provide optimum performance for customers.
· Implement elastic scalability for the web tier.
· Optimize the database server performance for read-heavy workloads.
· Reduce latency for users across Europe and the US.
· Design the new architecture with a 99.9% availability SLA.
Which solution should a Solutions Architect propose to meet these requirements while optimizing operational efficiency?
• ​
Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in two AWS Regions and three Availability Zones in each Region. Configure an Amazon ElastiCache cluster in front of a global Amazon Aurora MySQL database. Move the shared files to Amazon FSx with cross-Region synchronization. Configure Amazon CloudFront with the ALB as the origin and a price class that includes the US and Europe.
• ​
Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in one AWS Region and three Availability Zones. Configure an Amazon DocumentDB table in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin and select a price class that includes all global locations.
• ​
Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in one AWS Region and three Availability Zones. Configure an Amazon ElastiCache cluster in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin and select a price class that includes the US and Europe.
• ​
Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in two AWS Regions and two Availability Zones in each Region. Configure an Amazon ElastiCache cluster in front of a global Amazon Aurora MySQL database. Move the shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin and select a price class that includes the US and Europe. Configure EFS cross-Region replication.

A

• ​
Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in one AWS Region and three Availability Zones. Configure an Amazon ElastiCache cluster in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin and select a price class that includes the US and Europe.
(Correct)

Explanation
To meet the 99.9% availability SLA, a solution in a single Region with Auto Scaling and load balancing across multiple AZs is sufficient. To optimize the database for read-heavy workloads, Amazon ElastiCache can be placed in front of the Aurora MySQL database. The shared files can be easily moved to an Amazon EFS file system. CloudFront can be used to reduce latency for users in different geographies. In this case a price class that covers the US and Europe can be selected in CloudFront, which caches the content in those locations only and reduces cost.
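As an illustrative sketch (the distribution ID is a placeholder), the price class can be set on an existing distribution like this; PriceClass_100 covers the North American and European edge locations.

```python
import boto3

cf = boto3.client("cloudfront")

# Fetch the current distribution config, switch to the price class that
# covers North America and Europe, and push the change back.
resp = cf.get_distribution_config(Id="E1A2B3C4D5E6F7")
config = resp["DistributionConfig"]
config["PriceClass"] = "PriceClass_100"  # US/Canada/Europe edge locations

cf.update_distribution(
    Id="E1A2B3C4D5E6F7",
    IfMatch=resp["ETag"],   # required optimistic-locking token
    DistributionConfig=config,
)
```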
CORRECT:"Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in one AWS Region and three Availability Zones. Configure an Amazon ElastiCache cluster in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin and select a price class that includes the US and Europe" is the correct answer.
INCORRECT:"Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in two AWS Regions and two Availability Zones in each Region. Configure an Amazon ElastiCache cluster in front of a global Amazon Aurora MySQL database. Move the shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin and select a price class that includes the US and Europe. Configure EFS cross-Region replication" is incorrect.
There’s no such thing as EFS cross-Region replication so the shared files cannot be synchronized that way. There’s also no need to have a cross-Region solution to meet a 99.9% availability SLA and there’s no mechanism mentioned for directing traffic between Regions.
INCORRECT:"Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in one AWS Region and three Availability Zones. Configure an Amazon DocumentDB table in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin and select a price class that includes all global locations" is incorrect.
DocumentDB is not a caching engine and cannot be used in front of an Aurora DB. CloudFront should not use a price class the includes all global locations as this will be more costly and is not required in the solution.
INCORRECT:"Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in two AWS Regions and three Availability Zones in each Region. Configure an Amazon ElastiCache cluster in front of a global Amazon Aurora MySQL database. Move the shared files to Amazon FSx with cross-Region synchronization. Configure Amazon CloudFront with the ALB as the origin and a price class that includes the US and Europe" is incorrect.
Amazon FSx also does not have a feature for cross-Region synchronization. There’s also no need to have a cross-Region solution to meet a 99.9% availability SLA and there’s no mechanism mentioned for directing traffic between Regions.
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PriceClass.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-networking-content-delivery/
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
6
Q

A company runs an eCommerce web application on a pair of Amazon EC2 instances behind an Application Load Balancer. The application stores data in an Amazon DynamoDB table. Traffic has been increasing during major sales events, and read and write operations have slowed considerably over the busiest periods.
Which option provides a scalable application architecture to handle peak traffic loads with the LEAST development effort?
• ​
Use Auto Scaling groups for the web application and use DynamoDB auto scaling.

• ​
Use Auto Scaling groups for the web application and use Amazon Simple Queue Service (Amazon SQS) and an AWS Lambda function to write to DynamoDB.
• ​
Use AWS Lambda for the web application. Configure DynamoDB to use global tables.
• ​
Use AWS Lambda for the web application. Increase the read and write capacity of DynamoDB.

A

• ​
Use Auto Scaling groups for the web application and use DynamoDB auto scaling.
(Correct)

Explanation
This is a simple case of needing to add elasticity to the application. The question specifically states that the chosen option must incur the least development effort. Therefore, the best option is to simply use Amazon EC2 Auto Scaling for the web application and enable auto scaling for DynamoDB.
This solution provides a simple way to enable elasticity and does not require any refactoring of the application or updates to code.
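A minimal boto3 sketch of DynamoDB auto scaling for read capacity is shown below (the table name and capacity limits are placeholders); writes are configured the same way with the Write* equivalents.

```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target, then attach
# a target-tracking policy that aims for 70% consumed capacity.
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)
aas.put_scaling_policy(
    PolicyName="orders-read-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization",
        },
    },
)
```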
CORRECT: "Use Auto Scaling groups for the web application and use DynamoDB auto scaling" is the correct answer.
INCORRECT: "Use Auto Scaling groups for the web application and use Amazon Simple Queue Service (Amazon SQS) and an AWS Lambda function to write to DynamoDB" is incorrect. In this scenario it would be simpler and require less development effort to use Auto Scaling for both layers.
INCORRECT: "Use AWS Lambda for the web application. Increase the read and write capacity of DynamoDB" is incorrect. This requires major updates to the application code which is more development effort.
INCORRECT: "Use AWS Lambda for the web application. Configure DynamoDB to use global tables" is incorrect. This requires major updates to the application code which is more development effort.
References:
https://aws.amazon.com/ec2/autoscaling/
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-database/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
7
Q

A Solutions Architect is migrating an application to AWS Fargate. The task runs in a private subnet and does not have direct connectivity to the internet. When the Fargate task is launched, it fails with the following error:
"CannotPullContainerError: API error (500): Get https://111122223333.dkr.ecr.us-east-1.amazonaws.com/v2/: net/http: request canceled while waiting for connection"
What should the Solutions Architect do to correct the error?
• ​
Specify ENABLED for Auto-assign public IP when launching the task.
• ​
Enable dual-stack in the Amazon ECS account settings and configure the network for the task to use awsvpc.
• ​
Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a public subnet to route requests to the internet.
• ​
Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a private subnet to route requests to the internet.

A

• ​
Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a public subnet to route requests to the internet.
(Correct)

Explanation
When a Fargate task is launched, its elastic network interface requires a route to the internet to pull container images. If you receive an error similar to the following when launching a task, it is because a route to the internet does not exist:
"CannotPullContainerError: API error (500): Get https://111122223333.dkr.ecr.us-east-1.amazonaws.com/v2/: net/http: request canceled while waiting for connection"
To resolve this issue, you can:
- For tasks in public subnets, specify ENABLED for Auto-assign public IP when launching the task.
- For tasks in private subnets, specify DISABLED for Auto-assign public IP when launching the task, and configure a NAT gateway in your VPC to route requests to the internet (see the sketch below).
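A minimal boto3 sketch of the corrected task launch follows; the cluster, task definition, subnets, and security group are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Launch the task in private subnets with no public IP; image pulls
# from ECR then leave via the NAT gateway in the public subnet.
ecs.run_task(
    cluster="my-cluster",
    taskDefinition="my-task:1",
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0aaa111", "subnet-0bbb222"],  # private subnets
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        },
    },
)
```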
CORRECT: "Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a public subnet to route requests to the internet" is the correct answer.
INCORRECT: "Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a private subnet to route requests to the internet" is incorrect. The NAT gateway should be in a public subnet.
INCORRECT: "Specify ENABLED for Auto-assign public IP when launching the task" is incorrect. This will not work as the task is running in a private subnet and will not pick up a public IP.
INCORRECT: "Enable dual-stack in the Amazon ECS account settings and configure the network for the task to use awsvpc" is incorrect. This is used to enable IPv6 for a task but that is not required in this situation.
References:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_cannot_pull_image.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-containers/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
8
Q

A company is planning to migrate 30 small applications to AWS. The applications run on a mixture of Node.js and Python across a cluster of virtual servers on-premises. The company must minimize costs and standardize on a single deployment methodology for all applications. The applications have various usage patterns but generally have a low number of concurrent users. The applications use an average of 1 GB of memory, with up to 3 GB during peak processing periods, which can last several hours.
What is the MOST cost effective solution for these requirements?
• ​
Migrate the applications to Docker containers on Amazon ECS. Create a separate ECS task and service for each application. Enable service Auto Scaling based on memory utilization and set the threshold to 75%. Monitor services and hosts by using Amazon CloudWatch.
• ​
Migrate the applications to Amazon EC2 instances in Auto Scaling groups. Create separate target groups for each application behind an Application Load Balancer and use host-based routing. Configure Auto Scaling to scale based on memory utilization and set the threshold to 75%.
• ​
Migrate the applications to separate AWS Elastic Beanstalk environments. Enable Auto Scaling to ensure there are sufficient resources during peak processing periods. Monitor each AWS Elastic Beanstalk deployment by using CloudWatch alarms.
• ​
Migrate the applications to run on AWS Lambda with a separate function for each application. Use AWS CloudTrail logs and Amazon CloudWatch alarms to verify completion of important processes.

A

• ​
Migrate the applications to Docker containers on Amazon ECS. Create a separate ECS task and service for each application. Enable service Auto Scaling based on memory utilization and set the threshold to 75%. Monitor services and hosts by using Amazon CloudWatch.
(Correct)

Explanation
This is a good use case for Docker containers as the applications are small, need to scale based on memory usage, and have processes running that last several hours. Amazon ECS publishes CloudWatch metrics with your service’s average CPU and memory usage and this can be used with Service Auto Scaling to increase or decrease the desired count of tasks in the Amazon ECS service automatically.
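For illustration, a boto3 sketch of target-tracking scaling on the service's average memory utilization is shown below; the cluster/service names and capacity limits are placeholders.

```python
import boto3

aas = boto3.client("application-autoscaling")

# Scale the service's desired task count, tracking 75% average memory.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/apps-cluster/app-01",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=4,
)
aas.put_scaling_policy(
    PolicyName="app-01-memory-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/apps-cluster/app-01",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 75.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageMemoryUtilization",
        },
    },
)
```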
[Diagram: components of an ECS cluster using the EC2 launch type.]
CORRECT: "Migrate the applications to Docker containers on Amazon ECS. Create a separate ECS task and service for each application. Enable service Auto Scaling based on memory utilization and set the threshold to 75%. Monitor services and hosts by using Amazon CloudWatch" is the correct answer.
INCORRECT: "Migrate the applications to run on AWS Lambda with a separate function for each application. Use AWS CloudTrail logs and Amazon CloudWatch alarms to verify completion of important processes" is incorrect. The peak processing periods run for several hours, which rules out AWS Lambda with its maximum execution time of 900 seconds.
INCORRECT: "Migrate the applications to separate AWS Elastic Beanstalk environments. Enable Auto Scaling to ensure there are sufficient resources during peak processing periods. Monitor each AWS Elastic Beanstalk deployment by using CloudWatch alarms" is incorrect. This will be less cost-efficient than using ECS tasks.
INCORRECT: "Migrate the applications to Amazon EC2 instances in Auto Scaling groups. Create separate target groups for each application behind an Application Load Balancer and use host-based routing. Configure Auto Scaling to scale based on memory utilization and set the threshold to 75%" is incorrect. You cannot scale EC2 based on memory utilization unless you configure a custom metric in CloudWatch. This is also less cost-effective than using ECS tasks.
References:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-containers/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
9
Q

A company uses multiple AWS accounts. There are separate accounts for development, staging, and production environments. Some new requirements have been issued to control costs and improve the overall governance of the AWS accounts. The company must be able to calculate costs associated with each project and each environment. Commonly deployed IT services must be centrally managed and business units should be restricted to deploying pre-approved IT services only.
Which combination of actions should be taken to meet these requirements? (Select TWO.)
• ​
Apply environment, cost center, and application name tags to all resources that accept tags.

• ​
Create an AWS Service Catalog portfolio for each business unit and add products to the portfolios using AWS CloudFormation templates.

• ​
Use AWS Savings Plans to configure budget thresholds and send alerts to management.
• ​
Use Amazon CloudWatch to create a billing alarm that notifies managers when a billing threshold is reached or exceeded.
• ​
Configure custom budgets and define thresholds using AWS Cost Explorer.

A

• ​
Apply environment, cost center, and application name tags to all resources that accept tags.
(Correct)
• ​
Create an AWS Service Catalog portfolio for each business unit and add products to the portfolios using AWS CloudFormation templates.
(Correct)

Explanation
AWS Service Catalog enables organizations to create and manage catalogs of IT services that are approved for AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures.
AWS Service Catalog allows organizations to centrally manage commonly deployed IT services, and helps organizations achieve consistent governance and meet compliance requirements. End users can quickly deploy only the approved IT services they need, following the constraints set by the organization.
To track the costs associated with projects and environments, cost allocation tags should be applied to the relevant resources. Cost allocation tags are used to track AWS costs on a detailed level. After you activate cost allocation tags, AWS uses them to organize your resource costs on your cost allocation report, making it easier for you to categorize and track your AWS costs.
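As a minimal sketch (the instance ID and tag values are placeholders), the tags could be applied with boto3 as below; they must then be activated as cost allocation tags in the Billing console before they appear in cost reports.

```python
import boto3

ec2 = boto3.client("ec2")

# Apply environment, cost center, and application name tags; activate
# them as cost allocation tags afterwards so they appear in reports.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "Environment", "Value": "production"},
        {"Key": "CostCenter", "Value": "CC-1234"},
        {"Key": "ApplicationName", "Value": "payments-api"},
    ],
)
```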
CORRECT: "Apply environment, cost center, and application name tags to all resources that accept tags" is a correct answer.
CORRECT: "Create an AWS Service Catalog portfolio for each business unit and add products to the portfolios using AWS CloudFormation templates" is also a correct answer.
INCORRECT: "Configure custom budgets and define thresholds using AWS Cost Explorer" is incorrect. Cost Explorer is used for viewing cost related information but not for creating budgets.
INCORRECT: "Use AWS Savings Plans to configure budget thresholds and send alerts to management" is incorrect as this is not a service but a pricing model and cannot be used for sending alerts.
INCORRECT: "Use Amazon CloudWatch to create a billing alarm that notifies managers when a billing threshold is reached or exceeded" is incorrect. There is no requirement to create billing alarms specified in the scenario.
References:
https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-cost-management/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
10
Q

A company uses Amazon RedShift for analytics. Several teams deploy and manage their own RedShift clusters, and management has requested that the costs for these clusters are better managed. The management team has set budgets and, once the budgetary thresholds have been reached, a notification should be sent to a distribution list for managers. Teams should be able to view their RedShift cluster’s expenses to date. A Solutions Architect needs to create a solution that ensures the policy is centrally enforced in a multi-account environment.
Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)
• ​
Create an Amazon CloudWatch metric for billing. Create a custom alert when costs exceed the budgetary threshold.
• ​
Create an AWS Service Catalog portfolio for each team. Add each team’s Amazon RedShift cluster as an AWS CloudFormation template to their Service Catalog portfolio as a Product.

• ​
Update the AWS CloudFormation template to include the AWS::Budgets::Budget resource with the NotificationsWithSubscribers property.

• ​
Install the unified CloudWatch Agent on the RedShift cluster hosts. Track the billing metric data in CloudWatch and trigger an alarm when a threshold is reached.
• ​
Create an AWS CloudTrail trail that tracks data events. Configure Amazon CloudWatch to monitor the trail and trigger an alarm when billing metrics exceed a certain threshold.

A

• ​
Create an AWS Service Catalog portfolio for each team. Add each team’s Amazon RedShift cluster as an AWS CloudFormation template to their Service Catalog portfolio as a Product.
(Correct)
• ​
Update the AWS CloudFormation template to include the AWS::Budgets::Budget resource with the NotificationsWithSubscribers property.
(Correct)

Explanation
You can use AWS Budgets to track your service costs and usage within AWS Service Catalog. You can associate budgets with AWS Service Catalog products and portfolios.
AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.
If a budget is associated to a product, you can view information about the budget on the Products and Product details pages. If a budget is associated to a portfolio, you can view information about the budget on the Portfolios and Portfolio details pages.
When you click on a product or portfolio, you are taken to a detail page. These Portfolio detail and Product detail pages have a section with detailed information about the associated budget. You can see the budgeted amount, current spend, and forecasted spend. You also have the option to view budget details and edit the budget.
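For illustration, an equivalent budget can also be created through the API; the sketch below is a hedged boto3 example where the account ID, amount, and email address are placeholders, and the same structure maps onto the CloudFormation NotificationsWithSubscribers property.

```python
import boto3

budgets = boto3.client("budgets")

# Monthly cost budget with an alert at 80% of actual spend, emailed to
# the managers' distribution list.
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "redshift-team-a",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{
            "SubscriptionType": "EMAIL",
            "Address": "managers@example.com",
        }],
    }],
)
```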
CORRECT: "Update the AWS CloudFormation template to include the AWS::Budgets::Budget resource with the NotificationsWithSubscribers property" is a correct answer.
CORRECT: "Create an AWS Service Catalog portfolio for each team. Add each team’s Amazon RedShift cluster as an AWS CloudFormation template to their Service Catalog portfolio as a Product" is also a correct answer.
INCORRECT: "Install the unified CloudWatch Agent on the RedShift cluster hosts. Track the billing metric data in CloudWatch and trigger an alarm when a threshold is reached" is incorrect. This agent is used on EC2 instances for sending additional metric data and logs to CloudWatch. However, it is not used for budgeting.
INCORRECT: "Create an AWS CloudTrail trail that tracks data events. Configure Amazon CloudWatch to monitor the trail and trigger an alarm when billing metrics exceed a certain threshold" is incorrect. CloudTrail tracks API calls; it cannot be used for tracking billing data.
INCORRECT: "Create an Amazon CloudWatch metric for billing. Create a custom alert when costs exceed the budgetary threshold" is incorrect. Billing data is automatically collected; you cannot create a metric for billing, but you can create an alarm.
References:
https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_budgets.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-budgets-budget.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-management-governance/
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-cost-management/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
11
Q

A global enterprise company is in the process of creating an infrastructure services platform for its users. The company has the following requirements:
· Centrally manage the creation of infrastructure services using a central AWS account.
· Distribute infrastructure services to multiple accounts in AWS Organizations.
· Follow the principle of least privilege to limit end users’ permissions for launching and managing applications.
Which combination of actions using AWS services will meet these requirements? (Select TWO.)
• ​
Allow IAM users to have AWSServiceCatalogEndUserReadOnlyAccess permissions only. Assign the policy to a group called Endusers, add all users to the group. Apply launch constraints.

• ​
Define the infrastructure services in AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the AWS Organizations structure created for the company.

• ​
Grant IAM users AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.
• ​
Define the infrastructure services in AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket and add the IAM users that require access to the S3 bucket policy.
• ​
Allow IAM users to have AWSServiceCatalogEndUserFullAccess permissions. Assign the policy to a group called Endusers, add all users to the group. Apply launch constraints.

A

• ​
Allow IAM users to have AWSServiceCatalogEndUserReadOnlyAccess permissions only. Assign the policy to a group called Endusers, add all users to the group. Apply launch constraints.
(Correct)

• ​
Define the infrastructure services in AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the AWS Organizations structure created for the company.
(Correct)

Explanation
There are three core requirements for this solution. The first two requirements are satisfied by adding each CloudFormation template to a product in AWS Service Catalog in a central AWS account and then sharing the portfolio with AWS Organizations.
In this model, the central AWS account hosts the organizationally approved infrastructure services and shares them to other AWS accounts in the company. AWS Service Catalog administrators can reference an existing organization in AWS Organizations when sharing a portfolio, and they can share the portfolio with any trusted organizational unit (OU) in the organization’s tree structure.
The third requirement is satisfied by using a permissions policy with read only access to AWS Service Catalog combined with a launch constraint that will use a dedicated IAM role that ensures least privilege access.
Without a launch constraint, end users must launch and manage products using their own IAM credentials. To do so, they must have permissions for AWS CloudFormation, the AWS services used by the products, and AWS Service Catalog. By using a launch role, you can instead limit the end users’ permissions to the minimum that they require for that product.
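A minimal boto3 sketch of the sharing and constraint steps is shown below; the portfolio, product, organization, and role identifiers are placeholders, and organizations access must already be enabled for Service Catalog.

```python
import boto3, json

sc = boto3.client("servicecatalog")

# Share the central portfolio with the organization (an OU node also
# works), then add a launch constraint so products launch with a
# dedicated least-privilege role rather than end-user credentials.
sc.create_portfolio_share(
    PortfolioId="port-abcd1234efgh",
    OrganizationNode={"Type": "ORGANIZATION", "Value": "o-a1b2c3d4e5"},
)
sc.create_constraint(
    PortfolioId="port-abcd1234efgh",
    ProductId="prod-ijkl5678mnop",
    Type="LAUNCH",
    Parameters=json.dumps(
        {"RoleArn": "arn:aws:iam::123456789012:role/SCLaunchRole"}),
)
```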
CORRECT: "Define the infrastructure services in AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the AWS Organizations structure created for the company" is a correct answer.
CORRECT: "Allow IAM users to have AWSServiceCatalogEndUserReadOnlyAccess permissions only. Assign the policy to a group called Endusers, add all users to the group. Apply launch constraints" is also a correct answer.
INCORRECT: "Define the infrastructure services in AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket and add the IAM users that require access to the S3 bucket policy" is incorrect. This uses a central account but does not offer a mechanism to distribute the templates to accounts in AWS Organizations. It would also be very hard to manage access when adding users to bucket policies.
INCORRECT: "Grant IAM users AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3" is incorrect. When launching services using CloudFormation, the principal used (user or role) must have permissions to the AWS services being launched through the template. This solution does not provide those permissions.
INCORRECT: "Allow IAM users to have AWSServiceCatalogEndUserFullAccess permissions. Assign the policy to a group called Endusers, add all users to the group. Apply launch constraints" is incorrect. Users do not need full access; read only is sufficient as it does not provide the ability for users to launch and manage products using their own accounts. The launch constraint provides the necessary permissions for launching products using an assigned role.
References:
https://docs.aws.amazon.com/servicecatalog/latest/adminguide/controlling_access.html
https://docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-servicecatalog.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-networking-content-delivery/

https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-management-governance/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
12
Q

A company has deployed a new application into an Amazon VPC that does not have Internet access. The company has connected an AWS Direct Connect (DX) private VIF to the VPC and all communications will flow over the DX connection. A new requirement states that all data in transit must be encrypted between users and the VPC.
Which strategy should a Solutions Architect use to maintain consistent network performance while meeting this new requirement?
• ​
Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public virtual interface.
• ​
Create a client VPN endpoint and configure the users’ computers to use an AWS client VPN to connect to the VPC over the Internet.
• ​
Create a new private virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX private virtual interface.
• ​
Create a new Site-to-Site VPN that connects to the VPC over the internet.

A

• ​
Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public virtual interface.
(Correct)

Explanation
Running an AWS VPN connection over a DX connection provides consistent levels of throughput and encryption algorithms that protect your data. Though a private VIF is typically used to connect to a VPC, in the case of running an IPSec VPN over the top of a DX connection it is necessary to use a public VIF (see the AWS article linked below for instructions).
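As an illustrative sketch only, a public VIF could be provisioned with boto3 as below; the connection ID, VLAN, ASN, and peer addresses are placeholders that would come from your DX configuration.

```python
import boto3

dx = boto3.client("directconnect")

# Public VIF on the existing DX connection; the VPN's public tunnel
# endpoints then become reachable over the VIF instead of the internet.
dx.create_public_virtual_interface(
    connectionId="dxcon-abcd1234",
    newPublicVirtualInterface={
        "virtualInterfaceName": "vpn-over-dx",
        "vlan": 101,
        "asn": 65000,                        # customer BGP ASN
        "amazonAddress": "198.51.100.1/30",  # public peer IPs
        "customerAddress": "198.51.100.2/30",
        "routeFilterPrefixes": [{"cidr": "198.51.100.0/30"}],
    },
)
```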
CORRECT: "Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public virtual interface" is the correct answer.
INCORRECT: "Create a new private virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX private virtual interface" is incorrect. A public VIF must be used when using an IPSec VPN over a DX connection.
INCORRECT: "Create a client VPN endpoint and configure the users’ computers to use an AWS client VPN to connect to the VPC over the Internet" is incorrect. This does not maintain consistent network performance as the public internet offers variable performance.
INCORRECT: "Create a new Site-to-Site VPN that connects to the VPC over the internet" is incorrect. This does not maintain consistent network performance as the public internet offers variable performance. The DX connection should be utilized.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/create-vpn-direct-connect/
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-networking-content-delivery/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
13
Q

A company is planning to migrate on-premises resources to AWS. The resources include over 150 virtual machines (VMs) that use around 50 TB of storage. Most VMs can be taken offline outside of business hours, however, a few are mission critical and downtime must be minimized. The company’s internet bandwidth is fully utilized and cannot currently be increased. A Solutions Architect must design a migration strategy that can be completed within the next 3 months.
Which method would fulfill these requirements?
• ​
Migrate mission-critical VMs with AWS SMS. Export the other VMs locally and transfer them to Amazon S3 using AWS Snowball. Use VM Import/Export to import the VMs into Amazon EC2.
• ​
Set up a 1 Gbps AWS Direct Connect connection. Then, provision a private virtual interface, and use AWS Server Migration Service (SMS) to migrate the VMs into Amazon EC2.
• ​
Use an AWS Storage Gateway file gateway. Mount the file gateway and synchronize the VM filesystems to cloud storage. Use the VM Import/Export to import from cloud storage to Amazon EC2.
• ​
Export the VMs locally, beginning with the most mission-critical servers first. Use Amazon S3 Transfer Acceleration to quickly upload each VM to Amazon S3 after they are exported. Use VM Import/Export to import the VMs into Amazon EC2.

A

• ​
Set up a 1 Gbps AWS Direct Connect connection. Then, provision a private virtual interface, and use AWS Server Migration Service (SMS) to migrate the VMs into Amazon EC2.
(Correct)

Explanation
The best way to avoid downtime is to provision an AWS Direct Connect connection and use AWS SMS to migrate the VMs into EC2. With support for incremental replication, AWS SMS allows fast, scalable testing of migrated servers. This can also be used to perform a final replication to synchronize the final changes before cutover.
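For illustration, a hedged boto3 sketch of an incremental replication job is below; the server ID and role name are placeholders taken from an SMS server catalog.

```python
import boto3
from datetime import datetime, timedelta

sms = boto3.client("sms")

# Incremental replication every 12 hours for one discovered server.
sms.create_replication_job(
    serverId="s-1234567890abcdef0",
    seedReplicationTime=datetime.utcnow() + timedelta(minutes=30),
    frequency=12,                      # hours between incremental runs
    roleName="sms-replication-role",   # IAM role assumed by AWS SMS
)
```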
[Diagram: the migration process when using AWS SMS.]
CORRECT: "Set up a 1 Gbps AWS Direct Connect connection. Then, provision a private virtual interface, and use AWS Server Migration Service (SMS) to migrate the VMs into Amazon EC2" is the correct answer.
INCORRECT: "Migrate mission-critical VMs with AWS SMS. Export the other VMs locally and transfer them to Amazon S3 using AWS Snowball. Use VM Import/Export to import the VMs into Amazon EC2" is incorrect. The VMs that are exported and transported using Snowball will be offline for several days in this scenario which is not acceptable.
INCORRECT: "Use an AWS Storage Gateway file gateway. Mount the file gateway and synchronize the VM filesystems to cloud storage. Use the VM Import/Export to import from cloud storage to Amazon EC2" is incorrect. You cannot migrate VMs in this manner and you cannot mount block-based volumes and replicate the entire operating system volume using file-based storage systems.
INCORRECT: "Export the VMs locally, beginning with the most mission-critical servers first. Use Amazon S3 Transfer Acceleration to quickly upload each VM to Amazon S3 after they are exported. Use VM Import/Export to import the VMs into Amazon EC2" is incorrect. S3 has an object size limit of 5 TB, which could be an issue for some VM images. The key problem, however, is that there is no bandwidth to quickly upload these images; even Transfer Acceleration will not help if the bottleneck is the saturated internet link at the data center.
References:
https://docs.aws.amazon.com/server-migration-service/latest/userguide/server-migration.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-networking-content-delivery/
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-migration-transfer/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
14
Q

A company is moving their IT infrastructure to the AWS Cloud and will have several Amazon VPCs within an AWS Region. The company requires centralized and controlled egress-only internet access. The solution must be highly available and horizontally scalable. The company is expecting to grow the number of VPCs to more than fifty.
A Solutions Architect is designing the network for the new cloud deployment. Which design pattern will meet the stated requirements?
• ​
Attach each VPC to a centralized transit VPC with a VPN connection to each standalone VPC. Outbound internet traffic will be controlled by firewall appliances.
• ​
Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and attach the transit gateway.
• ​
Attach each VPC to a shared centralized VPC. Configure VPC peering between each VPC and the centralized VPC. Configure a NAT gateway in two AZs within the centralized VPC.
• ​
Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and connect the transit gateway using IPSec VPNs with BGP.

A

• ​
Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and connect the transit gateway using IPSec VPNs with BGP.
(Correct)

Explanation
A transit gateway is a network transit hub that you can use to interconnect your virtual private clouds (VPCs) and on-premises networks. You can attach the following to a transit gateway:
- One or more VPCs
- A Connect SD-WAN/third-party network appliance
- An AWS Direct Connect gateway
- A peering connection with another transit gateway
- A VPN connection to a transit gateway
The correct answer includes a VPN attachment with BGP for an AWS Transit Gateway. This allows BGP equal-cost multipath (ECMP) routing to be used, which can load balance traffic across multiple EC2 instances. This is the only solution that provides the ability to horizontally scale the outbound internet traffic across multiple appliances with HA across AZs.
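A minimal boto3 sketch of the key settings is below; the customer gateway ID is a placeholder, and one VPN connection would be created per firewall appliance.

```python
import boto3

ec2 = boto3.client("ec2")

# Transit gateway with ECMP enabled for VPN attachments, then one
# BGP (dynamic-routing) VPN connection per firewall appliance.
tgw = ec2.create_transit_gateway(
    Description="egress-hub",
    Options={"VpnEcmpSupport": "enable", "AmazonSideAsn": 64512},
)["TransitGateway"]

ec2.create_vpn_connection(
    CustomerGatewayId="cgw-0123456789abcdef0",  # one per appliance
    Type="ipsec.1",
    TransitGatewayId=tgw["TransitGatewayId"],
    Options={"StaticRoutesOnly": False},        # BGP, required for ECMP
)
```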
CORRECT: "Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and connect the transit gateway using IPSec VPNs with BGP" is the correct answer.
INCORRECT: "Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and attach the transit gateway" is incorrect. Transit Gateway is not a load balancer and will not distribute your traffic evenly across instances in the two AZs. The traffic across the Transit Gateway will stay within an AZ, if possible. Therefore, you are limited by the bandwidth capabilities of a single EC2 instance.
INCORRECT: "Attach each VPC to a shared centralized VPC. Configure VPC peering between each VPC and the centralized VPC. Configure a NAT gateway in two AZs within the centralized VPC" is incorrect. Edge-to-edge routing is not supported for VPC peering, so you cannot route across a VPC peering connection to a VPC and then out via a NAT gateway.
INCORRECT: "Attach each VPC to a centralized transit VPC with a VPN connection to each standalone VPC. Outbound internet traffic will be controlled by firewall appliances" is incorrect. A transit VPC is a legacy design pattern; AWS recommends AWS Transit Gateway for all new requirements. There is also no mention of how scaling and HA are included.
References:
https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/centralized-egress-to-internet.html
https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-networking-content-delivery/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
15
Q

A company runs several IT services in an on-premises data center that is connected to AWS using an AWS Direct Connect (DX) connection. The service data is sensitive and the company uses an IPSec VPN over the DX connection to encrypt data. Security requirements mandate that the data cannot traverse the internet. The company wants to offer the IT services to other companies who use AWS.
Which solution will meet these requirements?
• ​
Attach an internet gateway to the VPC and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
• ​
Configure a mesh of AWS VPN CloudHub IPsec VPN connections between the customer AWS accounts and the service provider AWS account.
• ​
Create a VPC Endpoint Service that accepts HTTP or HTTPS traffic and host it behind an Application Load Balancer. Enable access to the IT services over the DX connection.
• ​
Create a VPC Endpoint Service that accepts TCP traffic and host it behind a Network Load Balancer. Enable access to the IT services over the DX connection.

A

• ​
Create a VPC Endpoint Service that accepts TCP traffic and host it behind a Network Load Balancer. Enable access to the IT services over the DX connection.
(Correct)

Explanation
The solution is to use VPC endpoint services in a service provider model. In this model, a Network Load Balancer must be created in the service provider VPC in front of the application services. Remember that NLBs can use on-premises targets. A VPC endpoint service is then created that uses the NLB.
A service consumer that has been granted permissions then creates an interface endpoint to your service, optionally in each Availability Zone in which you configured your service.
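For illustration, a minimal boto3 sketch of the provider-side configuration is below; the NLB ARN and consumer account are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Endpoint service fronted by the NLB; consumers must then be allowed
# by principal before they can create interface endpoints.
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:eu-west-1:123456789012:"
        "loadbalancer/net/it-services/0123456789abcdef",
    ],
    AcceptanceRequired=True,   # approve each consumer connection manually
)["ServiceConfiguration"]

ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=svc["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::222233334444:root"],
)
```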

CORRECT: "Create a VPC Endpoint Service that accepts TCP traffic and host it behind a Network Load Balancer. Enable access to the IT services over the DX connection" is the correct answer.
INCORRECT: "Create a VPC Endpoint Service that accepts HTTP or HTTPS traffic and host it behind an Application Load Balancer. Enable access to the IT services over the DX connection" is incorrect. An NLB should be used for a VPC endpoint service.
INCORRECT: "Configure a mesh of AWS VPN CloudHub IPsec VPN connections between the customer AWS accounts and the service provider AWS account" is incorrect. VPNs use the internet and the internet must be avoided. Note that the VPN used by the company runs over DX, not over the internet, so the connection can be encrypted.
INCORRECT: "Attach an internet gateway to the VPC and ensure that network access control and security group rules allow the relevant inbound and outbound traffic" is incorrect. An internet gateway is used for internet-based connectivity which should be avoided. It is not needed for a VPC endpoint service.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-service-overview.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-networking-content-delivery/

16
Q

A Solutions Architect must design a solution for providing private connectivity from a company’s WAN network to multiple AWS Regions. The company has offices around the world and has its main data center in New York. The company has mandated that traffic must not traverse the public internet at any time. The solution must also be highly available.
How can the Solutions Architect meet these requirements?
• ​
Create two AWS Direct Connect connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connection. Use Direct Connect Gateway to access data in other AWS Regions.

• ​
Create an AWS Direct Connect connection from the New York data center to all AWS Regions the company uses. Configure the company WAN to send traffic via the New York data center and on to the respective DX connection to access AWS.
• ​
Create two AWS Direct Connect connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connection. Use an AWS transit VPC solution to access data in other AWS Regions.
• ​
Create two AWS Direct Connect connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connection. Use inter-region VPC peering to access the data in other AWS Regions.

A

• ​
Create two AWS Direct Connect connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connection. Use Direct Connect Gateway to access data in other AWS Regions.
(Correct)

Explanation
This is a great use case for a Direct Connect gateway, which can be connected to either a transit gateway or a virtual private gateway and then allows you to reach VPCs in multiple AWS Regions. For high availability the solution should have two DX connections from the New York data center, and the WAN should be configured through BGP to forward traffic to AWS across those connections.
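As a minimal sketch of this design (the gateway name, ASN, and virtual private gateway IDs are hypothetical placeholders), the DX gateway and its per-Region associations could be created with boto3 along these lines:

import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Create the Direct Connect gateway (a globally available object).
gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="global-dx-gateway",
    amazonSideAsn=64512,  # private ASN for the Amazon side of the BGP sessions
)
dxgw_id = gw["directConnectGateway"]["directConnectGatewayId"]

# Associate the DX gateway with a virtual private gateway in each Region.
for vgw_id in ["vgw-0useast1", "vgw-0euwest1"]:
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=dxgw_id,
        gatewayId=vgw_id,  # a VGW (or transit gateway) ID
    )

Each of the two physical DX connections would then carry a private virtual interface attached to this DX gateway, giving the WAN redundant private paths into every associated Region.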

CORRECT:”Create two AWS Direct Connect connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connection. Use Direct Connect Gateway to access data in other AWS Regions” is the correct answer.
INCORRECT:”Create an AWS Direct Connect connection from the New York data center to all AWS Regions the company uses. Configure the company WAN to send traffic via the New York data center and on to the respective DX connection to access AWS” is incorrect. Connecting a single data center to every Region with separate DX connections would be very expensive and is unnecessary; a single connection per Region would also not be highly available. It is better to connect to a local Region and use a DX gateway (or transit gateway) for connectivity from there.
INCORRECT:”Create two AWS Direct Connect connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connection. Use inter-region VPC peering to access the data in other AWS Regions” is incorrect. VPC peering does not support edge-to-edge routing, so the on-premises WAN cannot reach a peered VPC across the DX connection. Peering is also complex to set up at scale because transitive peering is not supported.
INCORRECT:”Create two AWS Direct Connect connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connection. Use an AWS transit VPC solution to access data in other AWS Regions” is incorrect. A transit VPC uses software VPN appliances to route between VPCs in a hub-and-spoke model; this approach has largely been replaced by AWS Transit Gateway. With multiple Regions involved, a DX gateway is the appropriate way to connect across Regions.
References:
https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-intro.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-networking-content-delivery/

17
Q

A company has connected their on-premises data center to AWS using a single AWS Direct Connect (DX) connection using a private virtual interface. The company is hosting the front end for a business-critical application in an Amazon VPC. The back end is hosted on-premises and the company requires consistent, reliable, and redundant connectivity between the front end and back end of the application.
Which design would provide the MOST resilient connectivity between AWS and the on-premises data center?
• ​
Install a second DX connection from a different network carrier and attach it to the same virtual private gateway as the first DX connection.

• ​
Add an additional physical connection for the existing DX connection using the same network carrier and join the connections to a link aggregation group (LAG) on the same private virtual interface.
• ​
Create an AWS Managed VPN connection that uses the public internet and attach it to the same virtual private gateway as the DX connection.
• ​
Use multiple IPSec VPN connections to separate virtual private gateways and configure BGP to prioritize the DX connection.

A

• ​
Install a second DX connection from a different network carrier and attach it to the same virtual private gateway as the first DX connection.
(Correct)

Explanation
Another DX connection should be established through a different carrier. This provides physical separation and redundancy for the DX connections and is preferable to using the same carrier, which could result in both links sharing the same physical pathways. The virtual private gateway has built-in redundancy, so sharing a VGW is acceptable.
CORRECT:”Install a second DX connection from a different network carrier and attach it to the same virtual private gateway as the first DX connection” is the correct answer.
The most highly available and redundant configuration for Direct Connect uses multiple DX locations, physical connections, and data centers, eliminating every single point of failure; see the resiliency recommendations referenced below.
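As a rough boto3 sketch of adding the second connection (the location code, carrier name, VLAN, ASN, and VGW ID are hypothetical placeholders), the order and VIF attachment could look like this:

import boto3

dx = boto3.client("directconnect")

# Order the second physical connection through a different network carrier
# so the two links do not share physical pathways.
conn = dx.create_connection(
    location="EqDC2",         # example DX location code
    bandwidth="1Gbps",
    connectionName="dc-backup-dx",
    providerName="CarrierB",  # a different carrier from the first connection
)

# Once the cross-connect is provisioned and the connection is available,
# attach a private VIF to the SAME virtual private gateway as the first link.
dx.create_private_virtual_interface(
    connectionId=conn["connectionId"],
    newPrivateVirtualInterface={
        "virtualInterfaceName": "backup-private-vif",
        "vlan": 102,
        "asn": 65000,                        # customer-side BGP ASN
        "virtualGatewayId": "vgw-0primary",  # shared with the first DX connection
    },
)

With both private VIFs advertising the same routes over BGP, traffic fails over automatically if either physical link is lost.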

INCORRECT:”Use multiple IPSec VPN connections to separate virtual private gateways and configure BGP to prioritize the DX connection” is incorrect. Separate DX links are preferable to using the internet, which can be subject to bandwidth and latency constraints and does not offer the required reliability.
INCORRECT:”Add an additional physical connection for the existing DX connection using the same network carrier and join the connections to a link aggregation group (LAG) on the same private virtual interface” is incorrect. This provides less redundancy because the same network carrier is used, meaning the physical pathways may be shared; connections in a LAG also terminate at the same AWS Direct Connect endpoint, adding another common point of failure.
INCORRECT:”Create an AWS Managed VPN connection that uses the public internet and attach it to the same virtual private gateway as the DX connection” is incorrect. The internet does not provide the reliability the solution requires, and since cost is not a stated constraint, a second DX connection is preferable.
References:
https://aws.amazon.com/directconnect/resiliency-recommendation/
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-networking-content-delivery/