Udemy Exam 3 Flashcards

1
Q

An IT company is using SQS queues for decoupling the various components of application architecture. As the consuming components need additional time to process SQS messages, the company wants to postpone the delivery of new messages to the queue for a few seconds.

As a solutions architect, which of the following solutions would you suggest to the company?

A

Use delay queues to postpone the delivery of new messages to the queue for a few seconds

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.

SQS offers two types of message queues.

  1. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery.
  2. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
  • Delay queues let you postpone the delivery of new messages to a queue for several seconds, for example, when your consumer application needs additional time to process messages.
  • If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period.
  • The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes.
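As a minimal sketch of how this is configured (boto3; the queue name is illustrative), the delay is set through the `DelaySeconds` queue attribute at creation time:

```python
def delay_queue_params(name: str, delay_seconds: int) -> dict:
    """Build create_queue parameters for an SQS delay queue."""
    if not 0 <= delay_seconds <= 900:  # 900 seconds = the 15-minute maximum
        raise ValueError("DelaySeconds must be between 0 and 900")
    return {
        "QueueName": name,
        "Attributes": {"DelaySeconds": str(delay_seconds)},
    }

def create_delay_queue():  # not called here: requires boto3 and AWS credentials
    import boto3
    sqs = boto3.client("sqs")
    return sqs.create_queue(**delay_queue_params("orders-queue", 10))
```

Every message sent to the queue then stays invisible to consumers for the configured 10 seconds.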
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

An engineering lead is designing a VPC with public and private subnets. The VPC and subnets use IPv4 CIDR blocks. There is one public subnet and one private subnet in each of three Availability Zones (AZs) for high availability. An internet gateway is used to provide internet access for the public subnets. The private subnets require access to the internet to allow EC2 instances to download software updates.

Which of the following options represents the correct solution to set up internet access for the private subnets?

A
  • Set up three NAT gateways, one in each public subnet in each AZ.
  • Create a custom route table for each AZ that forwards non-local traffic to the NAT gateway in its AZ
  • You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances.

To create a NAT gateway

  • You must specify the public subnet in which the NAT gateway should reside.
  • You must also specify an Elastic IP address to associate with the NAT gateway when you create it.

The Elastic IP address cannot be changed after you associate it with the NAT Gateway.

  • After you’ve created a NAT gateway, you must update the route table associated with one or more of your private subnets to point internet-bound traffic to the NAT gateway.
  • This enables instances in your private subnets to communicate with the internet.
  • Each NAT gateway is created in a specific Availability Zone and implemented with redundancy in that zone.
  • If you have resources in multiple Availability Zones and they share one NAT gateway, and if the NAT gateway’s Availability Zone is down, resources in the other Availability Zones lose internet access.

To create an Availability Zone-independent architecture, create a NAT gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.
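A rough boto3 sketch of this AZ-independent setup (the subnet, Elastic IP allocation, and route table IDs are hypothetical):

```python
def nat_route_entry(nat_gateway_id: str) -> dict:
    """Route all non-local IPv4 traffic to the given NAT gateway."""
    return {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": nat_gateway_id}

def provision_nat_per_az():  # not called here: requires boto3 and AWS credentials
    import boto3
    ec2 = boto3.client("ec2")
    # One (public subnet, Elastic IP allocation, private route table) per AZ.
    for public_subnet, eip_alloc, private_rtb in [
        ("subnet-pub-a", "eipalloc-aaa", "rtb-priv-a"),
        ("subnet-pub-b", "eipalloc-bbb", "rtb-priv-b"),
        ("subnet-pub-c", "eipalloc-ccc", "rtb-priv-c"),
    ]:
        ngw = ec2.create_nat_gateway(SubnetId=public_subnet, AllocationId=eip_alloc)
        ngw_id = ngw["NatGateway"]["NatGatewayId"]
        ec2.create_route(RouteTableId=private_rtb, **nat_route_entry(ngw_id))
```

Each private route table forwards `0.0.0.0/0` to the NAT gateway in its own AZ, so an outage in one AZ does not cut off internet access in the others.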

3
Q

A leading online gaming company is migrating its flagship application to AWS Cloud for delivering its online games to users across the world. The company would like to use a Network Load Balancer (NLB) to handle millions of requests per second. The engineering team has provisioned multiple instances in a public subnet and specified these instance IDs as the targets for the NLB.

As a solutions architect, can you help the engineering team understand the correct routing mechanism for these target instances?

A

Traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance

A Network Load Balancer

  • Functions at the fourth layer of the Open Systems Interconnection (OSI) model.
  • It can handle millions of requests per second.
  • After the load balancer receives a connection request, it selects a target from the target group for the default rule.
  • It attempts to open a TCP connection to the selected target on the port specified in the listener configuration.

Request Routing and IP Addresses

  • If you specify targets using an instance ID, traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance.
  • The load balancer rewrites the destination IP address from the data packet before forwarding it to the target instance.
  • If you specify targets using IP addresses, you can route traffic to an instance using any private IP address from one or more network interfaces.
  • This enables multiple applications on an instance to use the same port.
  • Note that each network interface can have its own security group.
  • The load balancer rewrites the destination IP address before forwarding it to the target.
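Registering targets by instance ID can be sketched as follows (boto3; the target group ARN and instance IDs are hypothetical):

```python
def instance_targets(instance_ids: list, port: int) -> list:
    """Build the Targets list for elbv2.register_targets using instance IDs."""
    return [{"Id": iid, "Port": port} for iid in instance_ids]

def register_nlb_targets():  # not called here: requires boto3 and AWS credentials
    import boto3
    elbv2 = boto3.client("elbv2")
    elbv2.register_targets(
        TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:"
                       "targetgroup/game-tg/0123456789abcdef",
        Targets=instance_targets(["i-0aaa", "i-0bbb"], 443),
    )
```

With instance-ID targets like these, the NLB routes to each instance's primary private IP on its primary network interface.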
4
Q

A retail company uses AWS Cloud to manage its IT infrastructure. The company has set up “AWS Organizations” to manage several departments running their AWS accounts and using resources such as EC2 instances and RDS databases. The company wants to provide shared and centrally-managed VPCs to all departments using applications that need a high degree of interconnectivity.

As a solutions architect, which of the following options would you choose to facilitate this use-case?

A

Use VPC sharing to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations

VPC sharing (part of Resource Access Manager)

VPC sharing allows multiple AWS accounts to create their application resources, such as EC2 instances, RDS databases, Redshift clusters, and Lambda functions, in shared and centrally-managed Amazon Virtual Private Clouds (VPCs).

  • The account that owns the VPC (owner) shares one or more subnets with other accounts (participants) that belong to the same organization from AWS Organizations.
  • After a subnet is shared, the participants can view, create, modify, and delete their application resources in the subnets shared with them. Participants cannot view, modify, or delete resources that belong to other participants or the VPC owner.
  • You can share Amazon VPCs to leverage the implicit routing within a VPC for applications that require a high degree of interconnectivity and are within the same trust boundaries.
  • This reduces the number of VPCs that you create and manage while using separate accounts for billing and access control.
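A sketch of the owner account's side of VPC sharing (boto3 Resource Access Manager; the share name, subnet ARN, and participant account ID are hypothetical):

```python
def share_params(name: str, subnet_arns: list, principals: list) -> dict:
    """Build create_resource_share parameters for sharing subnets via RAM."""
    return {
        "name": name,
        "resourceArns": subnet_arns,
        "principals": principals,          # account IDs, OU ARNs, or an org ARN
        "allowExternalPrincipals": False,  # keep sharing within the organization
    }

def share_subnets():  # not called here: requires boto3 and AWS credentials
    import boto3
    ram = boto3.client("ram")
    return ram.create_resource_share(**share_params(
        "shared-app-subnets",
        ["arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0abc"],
        ["222222222222"],
    ))
```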
5
Q

A retail company has connected its on-premises data center to the AWS Cloud via AWS Direct Connect. The company wants to be able to resolve DNS queries for any resources in the on-premises network from the AWS VPC and also resolve any DNS queries for resources in the AWS VPC from the on-premises network.

As a solutions architect, which of the following solutions can be combined to address the given use case? (Select two)

A
  1. Create an inbound endpoint on Route 53 Resolver and then DNS resolvers on the on-premises network can forward DNS queries to Route 53 Resolver via this endpoint
  2. Create an outbound endpoint on Route 53 Resolver and then Route 53 Resolver can conditionally forward queries to resolvers on the on-premises network via this endpoint
  • Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service.
  • Amazon Route 53 effectively connects user requests to infrastructure running in AWS – such as Amazon EC2 instances – and can also be used to route users to infrastructure outside of AWS.
  • By default, Route 53 Resolver automatically answers DNS queries for local VPC domain names for EC2 instances.
  • You can integrate DNS resolution between Resolver and DNS resolvers on your on-premises network by configuring forwarding rules.
  • To resolve any DNS queries for resources in the AWS VPC from the on-premises network, you can create an inbound endpoint on Route 53 Resolver and then DNS resolvers on the on-premises network can forward DNS queries to Route 53 Resolver via this endpoint.
  • To resolve DNS queries for any resources in the on-premises network from the AWS VPC, you can create an outbound endpoint on Route 53 Resolver and then Route 53 Resolver can conditionally forward queries to resolvers on the on-premises network via this endpoint.
  • To conditionally forward queries, you need to create Resolver rules that specify the domain names for the DNS queries that you want to forward (such as example.com) and the IP addresses of the DNS resolvers on the on-premises network that you want to forward the queries to.
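The conditional-forwarding rule described above can be sketched like this (boto3; the domain, on-premises resolver IPs, and endpoint ID are hypothetical):

```python
def forward_rule(domain: str, onprem_dns_ips: list, endpoint_id: str) -> dict:
    """Build create_resolver_rule parameters for conditional forwarding."""
    return {
        "CreatorRequestId": "fwd-" + domain,
        "RuleType": "FORWARD",
        "DomainName": domain,
        "TargetIps": [{"Ip": ip, "Port": 53} for ip in onprem_dns_ips],
        "ResolverEndpointId": endpoint_id,  # ID of the outbound endpoint
    }

def create_forwarding_rule():  # not called here: requires boto3 and AWS credentials
    import boto3
    r53resolver = boto3.client("route53resolver")
    return r53resolver.create_resolver_rule(**forward_rule(
        "corp.example.com", ["10.0.100.10", "10.0.100.11"], "rslvr-out-0abc"))
```

Queries for `corp.example.com` from the VPC are then forwarded through the outbound endpoint to the on-premises resolvers; the inbound endpoint handles the opposite direction.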
6
Q

The DevOps team at an IT company is provisioning a two-tier application in a VPC with a public subnet and a private subnet. The team wants to use either a NAT instance or a NAT gateway in the public subnet to enable instances in the private subnet to initiate outbound IPv4 traffic to the internet but needs some technical assistance in terms of the configuration options available for the NAT instance and the NAT gateway.

As a solutions architect, which of the following options would you identify as CORRECT? (Select three)

A
  1. NAT instance can be used as a bastion server
  2. Security Groups can be associated with a NAT instance
  3. NAT instance supports port forwarding
  • A NAT instance or a NAT Gateway can be used in a public subnet in your VPC to enable instances in the private subnet to initiate outbound IPv4 traffic to the Internet.
7
Q

An IT company hosts Windows-based applications in its on-premises data center. The company is looking at moving the business to the AWS Cloud. The cloud solution should offer shared storage space that multiple applications can access without the need for replication. Also, the solution should integrate with the company’s self-managed Active Directory domain.

Which of the following solutions addresses these requirements with the minimal integration effort?

A

Use Amazon FSx for Windows File Server as a shared storage solution

  • Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol.
  • It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration.
  • It offers single-AZ and multi-AZ deployment options, fully managed backups, and encryption of data at rest and in transit.
  • You can optimize cost and performance for your workload needs with SSD and HDD storage options; and you can scale storage and change the throughput performance of your file system at any time.
  • With Amazon FSx, you get highly available and durable file storage starting from $0.013 per GB-month.
  • Data deduplication enables you to optimize costs even further by removing redundant data.
  • You can increase your file system storage and scale throughput capacity at any time, making it easy to respond to changing business needs.
  • There are no upfront costs or licensing fees.
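Joining FSx for Windows File Server to a self-managed AD can be sketched as follows (boto3; the subnet, domain, DNS IPs, capacity values, and service-account credentials are all illustrative):

```python
def fsx_windows_params(subnet_id: str, domain: str, dns_ips: list,
                       user: str, password: str) -> dict:
    """Build create_file_system parameters for an AD-joined Windows file system."""
    return {
        "FileSystemType": "WINDOWS",
        "StorageType": "SSD",
        "StorageCapacity": 300,            # GiB, illustrative
        "SubnetIds": [subnet_id],
        "WindowsConfiguration": {
            "ThroughputCapacity": 32,      # MB/s, illustrative
            "SelfManagedActiveDirectoryConfiguration": {
                "DomainName": domain,
                "DnsIps": dns_ips,
                "UserName": user,          # an account permitted to join the domain
                "Password": password,
            },
        },
    }

def create_fsx_share():  # not called here: requires boto3 and AWS credentials
    import boto3
    fsx = boto3.client("fsx")
    return fsx.create_file_system(**fsx_windows_params(
        "subnet-0abc", "corp.example.com", ["10.0.0.2"], "FsxJoiner", "secret"))
```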
8
Q

A company has its application servers in the public subnet that connect to the RDS instances in the private subnet. For regular maintenance, the RDS instances need patch fixes that need to be downloaded from the internet.

Considering the company uses only IPv4 addressing and is looking for a fully managed service, which of the following would you suggest as an optimal solution?

A

Configure a NAT Gateway in the public subnet of the VPC

  • You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances.
  • To create a NAT gateway, you must specify the public subnet in which the NAT gateway should reside.
  • You must also specify an Elastic IP address to associate with the NAT gateway when you create it.
  • The Elastic IP address cannot be changed after you associate it with the NAT Gateway.
  • After you’ve created a NAT gateway, you must update the route table associated with one or more of your private subnets to point internet-bound traffic to the NAT gateway.
  • This enables instances in your private subnets to communicate with the internet.
  • If you no longer need a NAT gateway, you can delete it.
  • Deleting a NAT gateway disassociates its Elastic IP address, but does not release the address from your account.
9
Q

A freelance developer has built a Python-based web application. The developer would like to upload his code to AWS Cloud and have AWS handle the deployment automatically. He also wants access to the underlying operating system for further enhancements.

As a solutions architect, which of the following AWS services would you recommend for this use-case?

A

AWS Elastic Beanstalk

  • AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.
  • Simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring.
  • At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.
  • There is no additional charge for Elastic Beanstalk - you pay only for the AWS resources needed to store and run your applications.
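A sketch of creating a Beanstalk environment via boto3 (application, environment, stack, and key pair names are illustrative); attaching an EC2 key pair is one way to retain SSH access to the underlying operating system:

```python
def environment_params(app: str, env: str, stack: str, key_name: str) -> dict:
    """Build create_environment parameters; the EC2KeyName option keeps an
    SSH key pair attached so the underlying instances remain accessible."""
    return {
        "ApplicationName": app,
        "EnvironmentName": env,
        "SolutionStackName": stack,
        "OptionSettings": [{
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "EC2KeyName",
            "Value": key_name,
        }],
    }

def deploy():  # not called here: requires boto3 and AWS credentials
    import boto3
    eb = boto3.client("elasticbeanstalk")
    eb.create_application(ApplicationName="flask-app")
    # Solution stack names change over time; check list_available_solution_stacks().
    eb.create_environment(**environment_params(
        "flask-app", "flask-app-prod",
        "64bit Amazon Linux 2 v3.5.0 running Python 3.8", "dev-keypair"))
```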
10
Q

A biotechnology company has multiple High Performance Computing (HPC) workflows that quickly and accurately process and analyze genomes for hereditary diseases. The company is looking to migrate these workflows from their on-premises infrastructure to AWS Cloud.

As a solutions architect, which of the following networking components would you recommend on the EC2 instances running these HPC workflows?

A

Elastic Fabric Adapter

  • An Elastic Fabric Adapter (EFA) is a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications.
  • It enhances the performance of inter-instance communication that is critical for scaling HPC and machine learning applications.
  • EFA devices provide all of the functionality of Elastic Network Adapter (ENA) devices, plus a new OS-bypass hardware interface that allows user-space applications to communicate directly with the hardware-provided reliable transport functionality.
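Requesting an EFA at launch can be sketched like this (boto3; the AMI, subnet, and security group IDs are hypothetical, and the instance type must be EFA-capable):

```python
def efa_interface(subnet_id: str, security_group_ids: list) -> dict:
    """Network-interface spec that requests an EFA instead of a standard ENA."""
    return {
        "DeviceIndex": 0,
        "InterfaceType": "efa",
        "SubnetId": subnet_id,
        "Groups": security_group_ids,
    }

def launch_hpc_node():  # not called here: requires boto3 and AWS credentials
    import boto3
    ec2 = boto3.client("ec2")
    ec2.run_instances(
        ImageId="ami-0abc",            # hypothetical HPC AMI
        InstanceType="c5n.18xlarge",   # an EFA-capable instance type
        MinCount=1, MaxCount=1,
        NetworkInterfaces=[efa_interface("subnet-0abc", ["sg-0abc"])],
    )
```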
11
Q

A company has set up “AWS Organizations” to manage several departments running their own AWS accounts. The departments operate from different countries and are spread across various AWS Regions. The company wants to set up a consistent resource provisioning process across departments so that each resource follows pre-defined configurations such as using a specific type of EC2 instances, specific IAM roles for Lambda functions, etc.

As a solutions architect, which of the following options would you recommend for this use-case?

A

Use AWS CloudFormation StackSets to deploy the same template across AWS accounts and regions

  • AWS CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and regions with a single operation.
  • A stack set lets you create stacks in AWS accounts across regions by using a single AWS CloudFormation template.
  • Using an administrator account of an “AWS Organization”, you define and manage an AWS CloudFormation template, and use the template as the basis for provisioning stacks into selected target accounts of an “AWS Organization” across specified regions.
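A sketch of the two-step StackSets flow from the administrator account (boto3, self-managed permissions model; the stack set name, template file, account IDs, and regions are hypothetical):

```python
def stack_instances_params(stack_set: str, accounts: list, regions: list) -> dict:
    """Build create_stack_instances parameters (self-managed permissions model)."""
    return {"StackSetName": stack_set, "Accounts": accounts, "Regions": regions}

def provision():  # not called here: requires boto3 and AWS credentials
    import boto3
    cfn = boto3.client("cloudformation")
    cfn.create_stack_set(
        StackSetName="baseline-resources",
        TemplateBody=open("baseline.yaml").read(),  # hypothetical template file
    )
    cfn.create_stack_instances(**stack_instances_params(
        "baseline-resources",
        ["111111111111", "222222222222"],           # hypothetical account IDs
        ["us-east-1", "eu-west-1"],
    ))
```

One template thus provisions identical stacks into every listed account and region.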
12
Q

A retail company has its flagship application running on a fleet of EC2 instances behind an Elastic Load Balancer (ELB). The engineering team has been seeing recurrent issues wherein the in-flight requests from the ELB to the EC2 instances are getting dropped when an instance becomes unhealthy.

Which of the following features can be used to address this issue?

A

Connection Draining

  • To ensure that an Elastic Load Balancer stops sending requests to instances that are de-registering or unhealthy while keeping the existing connections open, use connection draining.
  • This enables the load balancer to complete in-flight requests made to instances that are de-registering or unhealthy.
  • The maximum timeout value can be set between 1 and 3,600 seconds (the default is 300 seconds).
  • When the maximum time limit is reached, the load balancer forcibly closes connections to the de-registering instance.
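Enabling connection draining can be sketched as follows (boto3 Classic Load Balancer API; the load balancer name is hypothetical):

```python
def draining_attrs(timeout_seconds: int) -> dict:
    """Connection-draining attributes for modify_load_balancer_attributes."""
    if not 1 <= timeout_seconds <= 3600:
        raise ValueError("timeout must be between 1 and 3600 seconds")
    return {"ConnectionDraining": {"Enabled": True, "Timeout": timeout_seconds}}

def enable_draining():  # not called here: requires boto3 and AWS credentials
    import boto3
    elb = boto3.client("elb")  # Classic Load Balancer API
    elb.modify_load_balancer_attributes(
        LoadBalancerName="flagship-elb",
        LoadBalancerAttributes=draining_attrs(300),  # 300 s is the default
    )
```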
13
Q

A retail company has a fleet of EC2 instances running behind an Auto Scaling group (ASG). The development team has configured two metrics that control the scale-in and scale-out policies of the ASG. The first is a target tracking policy that uses a custom metric to add and remove two instances, based on the number of SQS messages in the queue. The second is a step scaling policy that uses the CloudWatch CPUUtilization metric to launch one new instance when the existing instance exceeds 90 percent utilization for a specified length of time.

While testing, the scale-out policy criteria for both policies was met at the same time. How many new instances will be launched because of these multiple scaling policies?

A

Amazon EC2 Auto Scaling chooses the policy that provides the largest capacity, so policy with the custom metric is triggered, and two new instances will be launched by the ASG

  • A scaling policy instructs Amazon EC2 Auto Scaling to track a specific CloudWatch metric, and it defines what action to take when the associated CloudWatch alarm is in ALARM.
  • For an advanced scaling configuration, your Auto Scaling group can have more than one scaling policy.
  • For example, you can define one or more target tracking scaling policies, one or more step scaling policies, or both.
  • This provides greater flexibility to cover multiple scenarios.
  • When there are multiple policies in force at the same time, there’s a chance that each policy could instruct the Auto Scaling group to scale out (or in) at the same time.
  • For example, it’s possible that the CPUUtilization metric spikes and triggers the CloudWatch alarm at the same time that the SQS custom metric spikes and triggers the custom metric alarm.
  • When these situations occur, Amazon EC2 Auto Scaling chooses the policy that provides the largest capacity for both scale-out and scale-in.
  • Suppose, for example, that the policy for CPUUtilization launches one instance, while the policy for the SQS queue launches two instances.
  • If the scale-out criteria for both policies are met at the same time, Amazon EC2 Auto Scaling gives precedence to the SQS queue policy.
  • This results in the Auto Scaling group launching two instances.
  • The approach of giving precedence to the policy that provides the largest capacity applies even when the policies use different criteria for scaling in.
  • AWS recommends caution when using target tracking scaling policies with step scaling policies because conflicts between these policies can cause undesirable behavior.
  • For example, if the step scaling policy initiates a scale-in activity before the target tracking policy is ready to scale in, the scale-in activity will not be blocked.
  • After the scale-in activity completes, the target tracking policy could instruct the group to scale out again.
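The precedence rule above can be sketched as a pure function (the policy names are illustrative):

```python
def chosen_scale_out(current_capacity: int, adjustments: dict) -> tuple:
    """Pick the policy whose scale-out yields the largest resulting capacity."""
    winner = max(adjustments, key=adjustments.get)
    return winner, current_capacity + adjustments[winner]

# The example from this card: the CPU step policy adds 1 instance, the SQS
# custom-metric policy adds 2, so the SQS policy wins and 2 instances launch.
```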
14
Q

A financial services company wants to move the Windows file server clusters out of their datacenters. They are looking for cloud file storage offerings that provide full Windows compatibility. Can you identify the AWS storage services that provide highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol compatible with Windows systems? (Select two)

A

Amazon FSx for Windows File Server

File Gateway Configuration of AWS Storage Gateway

  • Amazon FSx for Windows File Server is a fully managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol.
  • It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration.
  • Depending on the use case, Storage Gateway provides 3 types of storage interfaces for on-premises applications: File, Volume, and Tape.
  • The File Gateway enables you to store and retrieve objects in Amazon S3 using file protocols such as Network File System (NFS) and Server Message Block (SMB).
15
Q

An AWS Organization is using Service Control Policies (SCP) for central control over the maximum available permissions for all accounts in their organization. This allows the organization to ensure that all accounts stay within the organization’s access control guidelines.

Which of the given scenarios are correct regarding the permissions described below? (Select three)

A
  1. If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can’t perform that action
  2. SCPs affect all users and roles in attached accounts, including the root user
  3. SCPs do not affect service-linked roles
  • Service control policies (SCPs) are one type of policy that can be used to manage your organization.
  • SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines.
  • In SCPs, you can restrict which AWS services, resources, and individual API actions the users and roles in each member account can access.
  • You can also define conditions for when to restrict access to AWS services, resources, and API actions.
  • These restrictions even override the administrators of member accounts in the organization.

Please note the following effects on permissions vis-a-vis the SCPs:

  • If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can’t perform that action.
  • SCPs affect all users and roles in the attached accounts, including the root user.
  • SCPs do not affect any service-linked role.
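As a sketch, an SCP is just a JSON policy document attached to an account or OU; this illustrative example denies one API action for every user and role, including root, in the attached account (the policy name and account ID are hypothetical):

```python
import json

# Deny a single API action for all users and roles in the attached accounts.
DENY_LEAVE_ORG = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "organizations:LeaveOrganization",
        "Resource": "*",
    }],
}

def attach_scp():  # not called here: requires boto3 and management-account credentials
    import boto3
    org = boto3.client("organizations")
    policy = org.create_policy(
        Name="deny-leave-org",
        Description="Prevent member accounts from leaving the organization",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(DENY_LEAVE_ORG),
    )
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="111111111111",  # hypothetical member account ID
    )
```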
16
Q

The engineering team at an e-commerce company wants to migrate from SQS Standard queues to FIFO queues with batching.

As a solutions architect, which of the following steps would you have in the migration checklist? (Select three)

A
  1. Delete the existing standard queue and recreate it as a FIFO queue
  2. Make sure that the name of the FIFO queue ends with the .fifo suffix
  3. Make sure that the throughput for the target FIFO queue does not exceed 3,000 messages per second
  • Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.
  • SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work.
  • Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

SQS offers two types of message queues.

  1. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery.
  2. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
  • By default, FIFO queues support up to 3,000 messages per second with batching, or up to 300 messages per second (300 send, receive, or delete operations per second) without batching.
  • Therefore, using batching you can meet a throughput requirement of up to 3,000 messages per second.
  • The name of a FIFO queue must end with the .fifo suffix.
  • The suffix counts towards the 80-character queue name limit. To determine whether a queue is FIFO, you can check whether the queue name ends with the suffix.
  • If you have an existing application that uses standard queues and you want to take advantage of the ordering or exactly-once processing features of FIFO queues, you need to configure the queue and your application correctly.
  • You can’t convert an existing standard queue into a FIFO queue.
  • To make the move, you must either create a new FIFO queue for your application or delete your existing standard queue and recreate it as a FIFO queue.
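Creating the replacement FIFO queue can be sketched like this (boto3; the queue name is illustrative, and `ContentBasedDeduplication` is an optional extra):

```python
def fifo_queue_params(name: str) -> dict:
    """Build create_queue parameters for a new FIFO queue."""
    if not name.endswith(".fifo"):
        raise ValueError("FIFO queue names must end with the .fifo suffix")
    return {
        "QueueName": name,
        "Attributes": {
            "FifoQueue": "true",                  # set at creation; immutable
            "ContentBasedDeduplication": "true",  # optional, illustrative
        },
    }

def recreate_as_fifo():  # not called here: requires boto3 and AWS credentials
    import boto3
    sqs = boto3.client("sqs")
    # The existing standard queue cannot be converted; create a new FIFO queue.
    return sqs.create_queue(**fifo_queue_params("orders.fifo"))
```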
17
Q

The business analytics team at a company has been running ad-hoc queries on Oracle and PostgreSQL services on Amazon RDS to prepare daily reports for senior management. To facilitate the business analytics reporting, the engineering team now wants to continuously replicate this data and consolidate these databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift.

As a solutions architect, which of the following would you recommend as the MOST resource-efficient solution that requires the LEAST amount of development time without the need to manage the underlying infrastructure?

A

Use AWS Database Migration Service to replicate the data from the databases into Amazon Redshift

  • AWS Database Migration Service helps you migrate databases to AWS quickly and securely.
  • The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
  • With AWS Database Migration Service, you can continuously replicate your data with high availability and consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3.
  • You can migrate data to Amazon Redshift databases using AWS Database Migration Service.
  • Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud.
  • With an Amazon Redshift database as a target, you can migrate data from all of the other supported source databases.
  • The Amazon Redshift cluster must be in the same AWS account and the same AWS Region as the replication instance.
  • During a database migration to Amazon Redshift, AWS DMS first moves data to an Amazon S3 bucket.
  • When the files reside in an Amazon S3 bucket, AWS DMS then transfers them to the proper tables in the Amazon Redshift data warehouse.
  • AWS DMS creates the S3 bucket in the same AWS Region as the Amazon Redshift database.
  • The AWS DMS replication instance must be located in that same region.
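A sketch of the DMS task for full load plus continuous replication (boto3; the endpoint and replication instance ARNs are hypothetical, and the table-mapping rule simply includes all schemas and tables):

```python
import json

def dms_task_params(task_id: str, source_arn: str, target_arn: str,
                    instance_arn: str) -> dict:
    """Build create_replication_task parameters for full load + ongoing CDC."""
    selection_rule = {
        "rules": [{
            "rule-type": "selection", "rule-id": "1", "rule-name": "1",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }
    return {
        "ReplicationTaskIdentifier": task_id,
        "SourceEndpointArn": source_arn,       # the Oracle/PostgreSQL endpoint
        "TargetEndpointArn": target_arn,       # the Redshift endpoint
        "ReplicationInstanceArn": instance_arn,
        "MigrationType": "full-load-and-cdc",  # initial load, then continuous replication
        "TableMappings": json.dumps(selection_rule),
    }

def start_replication():  # not called here: requires boto3 and AWS credentials
    import boto3
    dms = boto3.client("dms")
    return dms.create_replication_task(**dms_task_params(
        "rds-to-redshift",
        "arn:aws:dms:us-east-1:111111111111:endpoint:SRC",
        "arn:aws:dms:us-east-1:111111111111:endpoint:TGT",
        "arn:aws:dms:us-east-1:111111111111:rep:INST"))
```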
18
Q

A social media startup uses AWS Cloud to manage its IT infrastructure. The engineering team at the startup wants to perform weekly database rollovers for a MySQL database server using a serverless cron job that typically takes about 5 minutes to execute the database rollover script written in Python. The database rollover will archive the past week’s data from the production database to keep the database small while still keeping its data accessible.

As a solutions architect, which of the following would you recommend as the MOST cost-efficient and reliable solution?

A

Schedule a weekly CloudWatch event cron expression to invoke a Lambda function that runs the database rollover job

  • AWS Lambda lets you run code without provisioning or managing servers.
  • You pay only for the compute time you consume.
  • AWS Lambda supports standard rate and cron expressions for frequencies of up to once per minute.
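Wiring the weekly cron trigger to the Lambda can be sketched like this (boto3 CloudWatch Events/EventBridge; the rule name, schedule, and function ARN are hypothetical):

```python
# CloudWatch Events cron: minute hour day-of-month month day-of-week year.
WEEKLY_CRON = "cron(0 2 ? * SUN *)"  # every Sunday at 02:00 UTC (illustrative)

def schedule_rollover():  # not called here: requires boto3 and AWS credentials
    import boto3
    events = boto3.client("events")
    events.put_rule(Name="weekly-db-rollover", ScheduleExpression=WEEKLY_CRON)
    events.put_targets(Rule="weekly-db-rollover", Targets=[{
        "Id": "rollover-lambda",
        # Hypothetical ARN of the Python rollover function.
        "Arn": "arn:aws:lambda:us-east-1:111111111111:function:db-rollover",
    }])
```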
19
Q

An IT company is looking to move its on-premises infrastructure to AWS Cloud. The company has a portfolio of applications, a few of which use server-bound licenses that are valid for the next year. To utilize the licenses, the CTO wants to use Dedicated Hosts for a one-year term and then migrate the given instances to default tenancy thereafter.

As a solutions architect, which of the following options would you identify as CORRECT for changing the tenancy of an instance after you have launched it? (Select two)

A
  • You can change the tenancy of an instance from dedicated to host
  • You can change the tenancy of an instance from host to dedicated
  • Each EC2 instance that you launch into a VPC has a tenancy attribute.
  • By default, EC2 instances run on a shared-tenancy basis.
  • Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that’s dedicated to a single customer.
  • Dedicated Instances that belong to different AWS accounts are physically isolated at the hardware level.
  • However, Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances.
  • A Dedicated Host is also a physical server that’s dedicated to your use.
  • With a Dedicated Host, you have visibility and control over how instances are placed on the server.
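The allowed tenancy transitions can be sketched as follows (boto3; the instance ID is hypothetical, and the instance must be stopped before the change):

```python
ALLOWED_TENANCY_CHANGES = {("dedicated", "host"), ("host", "dedicated")}

def tenancy_change_params(instance_id: str, current: str, target: str) -> dict:
    """Tenancy can move between 'dedicated' and 'host' only, never to 'default'."""
    if (current, target) not in ALLOWED_TENANCY_CHANGES:
        raise ValueError("cannot change tenancy from %s to %s" % (current, target))
    return {"InstanceId": instance_id, "Tenancy": target}

def move_off_dedicated_host():  # not called here: requires boto3 and AWS credentials
    import boto3
    ec2 = boto3.client("ec2")
    ec2.stop_instances(InstanceIds=["i-0abc"])  # instance must be stopped first
    ec2.modify_instance_placement(**tenancy_change_params("i-0abc", "host", "dedicated"))
```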
20
Q

A media startup is looking at hosting their web application on AWS Cloud. The application will be accessed by users from different geographic regions of the world. The main feature of the application requires the upload and download of video files that can reach a maximum size of 10GB. The startup wants the solution to be cost-effective and scalable with the lowest possible latency for a great user experience.

As a Solutions Architect, which of the following will you suggest as an optimal solution to meet the given requirements?

A

Use Amazon S3 for hosting the web application and use S3 Transfer Acceleration to reduce the latency that geographically dispersed users might face

  • Amazon S3 Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects.
  • Customers who have either web or mobile applications with widespread users or applications hosted far away from their S3 bucket can experience long and variable upload and download speeds over the Internet.
  • S3 Transfer Acceleration (S3TA) reduces the variability in Internet routing, congestion, and speeds that can affect transfers, and logically shortens the distance to S3 for remote applications.
  • S3TA improves transfer performance by routing traffic through Amazon CloudFront’s globally distributed Edge Locations and over AWS backbone networks, and by using network protocol optimizations.
  • For applications interacting with your S3 buckets through the S3 API from outside of your bucket’s region, S3TA helps avoid the variability in Internet routing and congestion.
  • It does this by routing your uploads and downloads over the AWS global network infrastructure, so you get the benefit of AWS network optimizations.
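Enabling Transfer Acceleration and using the accelerate endpoint can be sketched like this (boto3; the bucket and object names are hypothetical):

```python
def accelerate_config(enabled: bool) -> dict:
    """AccelerateConfiguration payload for put_bucket_accelerate_configuration."""
    return {"Status": "Enabled" if enabled else "Suspended"}

def enable_and_upload():  # not called here: requires boto3 and AWS credentials
    import boto3
    from botocore.config import Config
    boto3.client("s3").put_bucket_accelerate_configuration(
        Bucket="media-uploads", AccelerateConfiguration=accelerate_config(True))
    # A client pointed at the accelerate endpoint; upload_file automatically
    # switches to multipart upload for large objects such as 10 GB videos.
    s3_fast = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    s3_fast.upload_file("video.mp4", "media-uploads", "videos/video.mp4")
```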
21
Q

A legacy application is built using a tightly-coupled monolithic architecture. Due to a sharp increase in the number of users, the application performance has degraded. The company now wants to decouple the architecture and adopt AWS microservices architecture. Some of these microservices need to handle fast running processes whereas other microservices need to handle slower processes.

Which of these options would you identify as the right way of connecting these microservices?

A

Configure Amazon SQS queue to decouple microservices running faster processes from the microservices running slower ones

  • Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.
  • SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating work.
  • Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.
  • Use Amazon SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be available.
  • SQS lets you decouple application components so that they run and fail independently, increasing the overall fault tolerance of the system.
  • Multiple copies of every message are stored redundantly across multiple availability zones so that they are available whenever needed.
  • Being able to store the messages and replay them is a very important feature in decoupling the system architecture, as is needed in the current use case.
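The decoupling pattern above can be sketched locally with a stdlib queue standing in for SQS (in production the calls would be `sqs.send_message` / `sqs.receive_message` via boto3; the service names here are illustrative). The fast microservice enqueues work and returns immediately; the slow microservice drains the queue at its own pace, so neither blocks the other.

```python
# Minimal local sketch of producer/consumer decoupling, with a stdlib queue
# as a stand-in for an SQS queue.
import queue

order_queue: "queue.Queue[dict]" = queue.Queue()

def fast_service_publish(order_id: int) -> None:
    # The producer returns immediately; it never waits on the slow consumer.
    order_queue.put({"order_id": order_id})

def slow_service_drain() -> list:
    # The consumer processes whatever has accumulated, at its own pace.
    processed = []
    while not order_queue.empty():
        processed.append(order_queue.get()["order_id"])
    return processed

for oid in range(3):
    fast_service_publish(oid)
print(slow_service_drain())  # [0, 1, 2]
```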
22
Q

The engineering team at a social media company wants to use Amazon CloudWatch alarms to automatically recover EC2 instances if they become impaired. The team has hired you as a solutions architect to provide subject matter expertise.

As a solutions architect, which of the following statements would you identify as CORRECT regarding this automatic recovery process? (Select two)

A
  1. A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata
  2. If your instance has a public IPv4 address, it retains the public IPv4 address after recovery
  • You can create an Amazon CloudWatch alarm to automatically recover the Amazon EC2 instance if it becomes impaired due to an underlying hardware failure or a problem that requires AWS involvement to repair.
  • Terminated instances cannot be recovered.
  • A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata.
  • If the impaired instance is in a placement group, the recovered instance runs in the placement group.
  • If your instance has a public IPv4 address, it retains the public IPv4 address after recovery.
  • During instance recovery, the instance is migrated during an instance reboot, and any data that is in-memory is lost.
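A recovery alarm of this kind can be sketched as the parameter dict passed to `cloudwatch.put_metric_alarm(**params)` via boto3. The metric name (`StatusCheckFailed_System`) and the `arn:aws:automate:<region>:ec2:recover` action ARN format follow the EC2 documentation; the region, instance ID, and alarm name below are placeholders.

```python
# Sketch of CloudWatch alarm parameters for the EC2 auto-recovery action.
region = "us-east-1"                           # placeholder
instance_id = "i-0123456789abcdef0"            # placeholder

params = {
    "AlarmName": "ec2-auto-recover",           # placeholder name
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_System",  # system (hardware-level) check
    "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 3,                    # 3 consecutive failed minutes
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    # The documented EC2 recover action for this region:
    "AlarmActions": [f"arn:aws:automate:{region}:ec2:recover"],
}
print(params["AlarmActions"][0])
```

Note that the alarm watches the *system* status check, not the instance status check: recovery only helps with underlying hardware problems, which is exactly the failure mode described above.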
23
Q

A retail organization is moving some of its on-premises data to AWS Cloud. The DevOps team at the organization has set up an AWS Managed IPSec VPN Connection between their remote on-premises network and their Amazon VPC over the internet.

Which of the following represents the correct configuration for the IPSec VPN Connection?

A

Create a Virtual Private Gateway on the AWS side of the VPN and a Customer Gateway on the on-premises side of the VPN

  • Amazon VPC provides the facility to create an IPsec VPN connection (also known as site-to-site VPN) between remote customer networks and their Amazon VPC over the internet.

The following are the key concepts for a site-to-site VPN:

  • Virtual private gateway: A Virtual Private Gateway (also known as a VPN Gateway) is the endpoint on the AWS VPC side of your VPN connection.
  • VPN connection: A secure connection between your on-premises equipment and your VPCs.
  • VPN tunnel: An encrypted link where data can pass from the customer network to or from AWS.
  • Customer Gateway: An AWS resource that provides information to AWS about your Customer Gateway device.
  • Customer Gateway device: A physical device or software application on the customer side of the Site-to-Site VPN connection.
24
Q

A leading bank has moved its IT infrastructure to AWS Cloud and they have been using Amazon EC2 Auto Scaling for their web servers. This has helped them deal with traffic spikes effectively. But, their relational database has now become a bottleneck and they urgently need a fully managed auto scaling solution for their relational database to address any unpredictable changes in the traffic.

Can you identify the AWS service that is best suited for this use-case?

A

Amazon Aurora Serverless

  • Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible and PostgreSQL-compatible editions), where the database will automatically start up, shut down, and scale capacity up or down based on your application’s needs.
  • It enables you to run your database in the cloud without managing any database instances.
  • It’s a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads.
  • You pay on a per-second basis for the database capacity you use when the database is active and migrate between standard and serverless configurations with a few clicks in the Amazon RDS Management Console.
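As a sketch, the cluster described above would be created with `rds.create_db_cluster(**params)` via boto3, with `EngineMode` set to `serverless` and a `ScalingConfiguration` block; the identifier and capacity values below are placeholders.

```python
# Sketch of boto3 parameters for an Aurora Serverless (v1) cluster.
params = {
    "DBClusterIdentifier": "orders-db",      # placeholder
    "Engine": "aurora-mysql",
    "EngineMode": "serverless",              # enables auto start/stop/scale
    "ScalingConfiguration": {
        "MinCapacity": 1,                    # Aurora capacity units (ACUs)
        "MaxCapacity": 16,
        "AutoPause": True,                   # pause when idle...
        "SecondsUntilAutoPause": 300,        # ...after 5 minutes
    },
}
print(params["EngineMode"])  # serverless
```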
25
Q

A financial services company has recently migrated from on-premises infrastructure to AWS Cloud. The DevOps team wants to implement a solution that allows all resource configurations to be reviewed and make sure that they meet compliance guidelines. Also, the solution should be able to offer the capability to look into the resource configuration history across the application stack.

As a solutions architect, which of the following solutions would you recommend to the team?

A

“Use AWS Config to review resource configurations to meet compliance guidelines and maintain a history of resource configuration changes”

  • AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources.
  • With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines.
  • You can use Config to answer questions such as - “What did my AWS resource look like at xyz point in time?”.
26
Q

An e-commerce company is using an Elastic Load Balancer for its fleet of EC2 instances spread across two Availability Zones, with one instance as a target in Availability Zone A and four instances as targets in Availability Zone B. The company is doing benchmarking for server performance when cross-zone load balancing is enabled compared to the case when cross-zone load balancing is disabled.

As a solutions architect, which of the following traffic distribution outcomes would you identify as correct?

A
  • With cross-zone load balancing enabled, one instance in Availability Zone A receives 20% traffic and four instances in Availability Zone B receive 20% traffic each.
  • With cross-zone load balancing disabled, one instance in Availability Zone A receives 50% traffic and four instances in Availability Zone B receive 12.5% traffic each.
  • The nodes for your load balancer distribute requests from clients to registered targets.
  • When cross-zone load balancing is enabled, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones.
  • Therefore, one instance in Availability Zone A receives 20% traffic and four instances in Availability Zone B receive 20% traffic each.
  • When cross-zone load balancing is disabled, each load balancer node distributes traffic only across the registered targets in its Availability Zone.
  • Therefore, one instance in Availability Zone A receives 50% traffic and four instances in Availability Zone B receive 12.5% traffic each.
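The traffic split above can be computed directly. With cross-zone load balancing enabled, all registered targets share traffic equally; with it disabled, each AZ's load balancer node receives an equal share of traffic and splits it only among that AZ's targets.

```python
# Per-instance traffic share (%) for each AZ, given targets per AZ.
def per_instance_share(targets_per_az: list, cross_zone: bool) -> list:
    if cross_zone:
        total = sum(targets_per_az)
        # Every instance, in any AZ, gets an equal share of all traffic.
        return [100 / total for _ in targets_per_az]
    # Each AZ gets an equal slice, divided among its own targets.
    az_share = 100 / len(targets_per_az)
    return [az_share / n for n in targets_per_az]

# One target in AZ-A, four targets in AZ-B:
print(per_instance_share([1, 4], cross_zone=True))   # [20.0, 20.0]
print(per_instance_share([1, 4], cross_zone=False))  # [50.0, 12.5]
```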
27
Q

An e-commerce company runs its web application on EC2 instances in an Auto Scaling group and it’s configured to handle consumer orders in an SQS queue for downstream processing. The DevOps team has observed that the performance of the application goes down in case of a sudden spike in orders received.

As a solutions architect, which of the following solutions would you recommend to address this use-case?

A

Use a target tracking scaling policy based on a custom Amazon SQS queue metric

  • If you use a target tracking scaling policy based on a custom Amazon SQS queue metric, dynamic scaling can adjust to the demand curve of your application more effectively.
  • You may use an existing CloudWatch Amazon SQS metric like ApproximateNumberOfMessagesVisible for target tracking, but the number of messages in the queue might not change proportionally to the size of the Auto Scaling group that processes messages from the queue, so that metric alone scales poorly.
  • The solution is to use a backlog per instance metric with the target value being the acceptable backlog per instance to maintain.
  • To calculate your backlog per instance, divide the ApproximateNumberOfMessages queue attribute by the number of instances in the InService state for the Auto Scaling group.
  • Then set a target value for the Acceptable backlog per instance.

To illustrate with an example, let’s say that the current ApproximateNumberOfMessages is 1500 and the fleet’s running capacity is 10. If the average processing time is 0.1 seconds for each message and the longest acceptable latency is 10 seconds, then the acceptable backlog per instance is 10 / 0.1, which equals 100. This means that 100 is the target value for your target tracking policy. If the backlog per instance is currently at 150 (1500 / 10), your fleet scales out, and it scales out by five instances to maintain proportion to the target value.
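The worked example above, expressed as code: the backlog-per-instance metric, the acceptable backlog target, and the capacity the target tracking policy would scale to.

```python
import math

def acceptable_backlog(latency_s: float, per_msg_s: float) -> float:
    # Longest acceptable latency divided by average per-message processing time.
    return latency_s / per_msg_s

def backlog_per_instance(messages_visible: int, in_service: int) -> float:
    # ApproximateNumberOfMessages divided by InService instance count.
    return messages_visible / in_service

def required_capacity(messages_visible: int, target_backlog: float) -> int:
    # Capacity needed to bring backlog per instance down to the target.
    return math.ceil(messages_visible / target_backlog)

target = acceptable_backlog(latency_s=10, per_msg_s=0.1)   # 100.0
current = backlog_per_instance(1500, 10)                   # 150.0
needed = required_capacity(1500, target)                   # 15
print(target, current, needed - 10)  # fleet scales out by 5 instances
```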

28
Q

A video conferencing application is hosted on a fleet of EC2 instances which are part of an Auto Scaling group (ASG). The ASG uses a Launch Configuration (LC1) with “dedicated” instance placement tenancy but the VPC (V1) used by the Launch Configuration LC1 has the instance tenancy set to default. Later the DevOps team creates a new Launch Configuration (LC2) with “default” instance placement tenancy but the VPC (V2) used by the Launch Configuration LC2 has the instance tenancy set to dedicated.

Which of the following is correct regarding the instances launched via Launch Configuration LC1 and Launch Configuration LC2?

A

The instances launched by both Launch Configuration LC1 and Launch Configuration LC2 will have dedicated instance tenancy

  • A launch configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances.
  • When you create a launch configuration, you specify information for the instances, such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping.
  • If you’ve launched an EC2 instance before, this is the same information you specified to launch the instance.
  • When you create a launch configuration, the default value for the instance placement tenancy is null and the instance tenancy is controlled by the tenancy attribute of the VPC.
  • If you set the Launch Configuration Tenancy to default and the VPC Tenancy is set to dedicated, then the instances have dedicated tenancy.
  • If you set the Launch Configuration Tenancy to dedicated and the VPC Tenancy is set to default, then again the instances have dedicated tenancy.
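The tenancy rules above reduce to a small decision function: a launch configuration tenancy of null defers to the VPC, and "dedicated" on either side wins.

```python
# Effective instance tenancy from the launch configuration and VPC settings.
def effective_tenancy(lc_tenancy, vpc_tenancy: str) -> str:
    if lc_tenancy is None:                   # LC default is null: VPC decides
        return vpc_tenancy
    if "dedicated" in (lc_tenancy, vpc_tenancy):
        return "dedicated"
    return "default"

# LC1 ("dedicated") in VPC V1 ("default"), LC2 ("default") in VPC V2 ("dedicated"):
print(effective_tenancy("dedicated", "default"))   # dedicated
print(effective_tenancy("default", "dedicated"))   # dedicated
```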
29
Q

The development team at a retail company wants to optimize the cost of EC2 instances. The team wants to move certain nightly batch jobs to spot instances. The team has hired you as a solutions architect to provide the initial guidance.

Which of the following would you identify as CORRECT regarding the capabilities of spot instances? (Select three)

A
  1. If a spot request is persistent, then it is opened again after your Spot Instance is interrupted
  2. Spot blocks are designed not to be interrupted
  3. When you cancel an active spot request, it does not terminate the associated instance
  • A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price.
  • Because Spot Instances enable you to request unused EC2 instances at steep discounts, you can lower your Amazon EC2 costs significantly.
  • The hourly price for a Spot Instance is called a Spot price.
  • The Spot price of each instance type in each Availability Zone is set by Amazon EC2 and adjusted gradually based on the long-term supply of and demand for Spot Instances.
  • A Spot Instance request is either one-time or persistent.
  • If the spot request is persistent, the request is opened again after your Spot Instance is interrupted.
  • If the request is persistent and you stop your Spot Instance, the request only opens after you start your Spot Instance.

Therefore the option - “If a spot request is persistent, then it is opened again after your Spot Instance is interrupted” - is correct.

  • Spot Instances with a defined duration (also known as Spot blocks) are designed not to be interrupted and will run continuously for the duration you select.
  • You can use a duration of 1, 2, 3, 4, 5, or 6 hours.
  • In rare situations, Spot blocks may be interrupted due to Amazon EC2 capacity needs.

Therefore, the option - “Spot blocks are designed not to be interrupted” - is correct.

  • If your Spot Instance request is active and has an associated running Spot Instance, or your Spot Instance request is disabled and has an associated stopped Spot Instance, canceling the request does not terminate the instance; you must terminate the running Spot Instance manually.
  • Moreover, to cancel a persistent Spot request and terminate its Spot Instances, you must cancel the Spot request first and then terminate the Spot Instances.

Therefore, the option - “When you cancel an active spot request, it does not terminate the associated instance” - is correct.

30
Q

A data analytics company is using SQS queues for decoupling the various processes of an application workflow. The company wants to postpone the delivery of certain messages to the queue by one minute while all other messages need to be delivered immediately to the queue.

As a solutions architect, which of the following solutions would you suggest to the company?

A

Use message timers to postpone the delivery of certain messages to the queue by one minute

  • You can use message timers to set an initial invisibility period for a message added to a queue.
  • So, if you send a message with a 60-second timer, the message isn’t visible to consumers for its first 60 seconds in the queue.
  • The default (minimum) delay for a message is 0 seconds.
  • The maximum is 15 minutes.
  • Therefore, you should use message timers to postpone the delivery of certain messages to the queue by one minute.
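A per-message timer is just the `DelaySeconds` parameter on `send_message` (via boto3), which overrides the queue's default delay for that one message. The sketch below builds the call parameters and validates the documented 0-900 second range; the queue URL is a placeholder.

```python
# Sketch: build send_message parameters with a per-message timer.
def send_message_params(queue_url: str, body: str, delay_s: int = 0) -> dict:
    if not 0 <= delay_s <= 900:          # 0 seconds to 15 minutes
        raise ValueError("DelaySeconds must be between 0 and 900")
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "DelaySeconds": delay_s,         # invisible to consumers this long
    }

# Postpone one message by a minute; other messages keep the default of 0:
delayed = send_message_params("https://sqs.../my-queue", "order-42", delay_s=60)
print(delayed["DelaySeconds"])  # 60
```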
31
Q

A global pharmaceutical company wants to move most of the on-premises data into Amazon S3, Amazon EFS, and Amazon FSx for Windows File Server easily, quickly, and cost-effectively.

As a solutions architect, which of the following solutions would you recommend as the BEST fit to automate and accelerate online data transfers to these AWS storage services?

A

Use AWS DataSync to automate and accelerate online data transfers to the given AWS storage services

  • AWS DataSync is an online data transfer service that simplifies, automates, and accelerates copying large amounts of data to and from AWS storage services over the internet or AWS Direct Connect.
  • AWS DataSync fully automates and accelerates moving large active datasets to AWS, up to 10 times faster than command-line tools.
  • It is natively integrated with Amazon S3, Amazon EFS, Amazon FSx for Windows File Server, Amazon CloudWatch, and AWS CloudTrail, which provides seamless and secure access to your storage services, as well as detailed monitoring of the transfer.
  • DataSync uses a purpose-built network protocol and scale-out architecture to transfer data.
  • A single DataSync agent is capable of saturating a 10 Gbps network link.
  • DataSync fully automates the data transfer.
  • It comes with retry and network resiliency mechanisms, network optimizations, built-in task scheduling, monitoring via the DataSync API and Console, and CloudWatch metrics, events, and logs that provide granular visibility into the transfer process.
  • DataSync performs data integrity verification both during the transfer and at the end of the transfer.
32
Q

Your application is hosted by a provider on yourapp.provider.com. You would like to have your users access your application using www.your-domain.com, which you own and manage under Route 53.

What Route 53 record should you create?

A

Create a CNAME record

  • A CNAME record maps DNS queries for the name of the current record, such as acme.example.com, to another domain (example.com or example.net) or subdomain (acme.example.com or zenith.example.org).
  • CNAME records can be used to map one domain name to another.
  • Keep in mind, though, that the DNS protocol does not allow you to create a CNAME record for the top node of a DNS namespace, also known as the zone apex.

For example, if you register the DNS name example.com, the zone apex is example.com. You cannot create a CNAME record for example.com, but you can create CNAME records for www.example.com, newproduct.example.com, and so on.
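The record for this use case can be sketched as the change batch passed to `route53.change_resource_record_sets` via boto3; the TTL value is a placeholder.

```python
# Sketch: a CNAME mapping www.your-domain.com to the provider's hostname.
change_batch = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.your-domain.com",    # a subdomain, NOT the zone apex
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [{"Value": "yourapp.provider.com"}],
        },
    }]
}
print(change_batch["Changes"][0]["ResourceRecordSet"]["Type"])  # CNAME
```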

33
Q

A company is looking for an orchestration solution to manage a workflow that uses AWS Glue and AWS Lambda to process data on its S3 based data lake.

As a solutions architect, which of the following AWS services involves the LEAST development effort for this use-case?

A

AWS Step Functions

  • AWS Step Functions lets you coordinate and orchestrate multiple AWS services such as AWS Lambda and AWS Glue into serverless workflows.
  • Workflows are made up of a series of steps, with the output of one step acting as input into the next.
  • A Step Function automatically triggers and tracks each step, and retries when there are errors, so your application executes in order and as expected.
  • The Step Function can ensure that the Glue ETL job and the lambda functions execute in order and complete successfully as per the workflow defined in the given use-case.
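A minimal Amazon States Language sketch for this workflow: a Glue job step followed by a Lambda step. The job and function names are placeholders; the `.sync` suffix on the Glue integration makes Step Functions wait for the job to finish before moving on.

```python
import json

# State machine definition: run the Glue ETL job, then invoke a Lambda.
definition = {
    "StartAt": "RunGlueETL",
    "States": {
        "RunGlueETL": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "s3-datalake-etl"},    # placeholder
            "Next": "PostProcess",
        },
        "PostProcess": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {"FunctionName": "post-process"},  # placeholder
            "End": True,
        },
    },
}
print(json.dumps(definition, indent=2)[:60])
```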
34
Q

The DevOps team at an IT company has created a custom VPC (V1) and attached an Internet Gateway (I1) to the VPC. The team has also created a subnet (S1) in this custom VPC and added a route to this subnet’s route table (R1) that directs internet-bound traffic to the Internet Gateway. Now the team launches an EC2 instance (E1) in the subnet S1 and assigns a public IPv4 address to this instance. Next the team also launches a NAT instance (N1) in the subnet S1.

Under the given infrastructure setup, which of the following entities is doing the Network Address Translation for the EC2 instance E1?

A

Internet Gateway (I1)

An Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet.

An Internet Gateway serves two purposes:

  1. To provide a target in your VPC route tables for internet-routable traffic and
  2. To perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.
  • Therefore, for instance E1, the Network Address Translation is done by Internet Gateway I1.
  • Additionally, an Internet Gateway supports IPv4 and IPv6 traffic.
  • It does not cause availability risks or bandwidth constraints on your network traffic.

To enable access to or from the internet for instances in a subnet in a VPC, you must do the following:

  • Attach an Internet gateway to your VPC.
  • Add a route to your subnet’s route table that directs internet-bound traffic to the internet gateway.
  • If a subnet is associated with a route table that has a route to an internet gateway, it’s known as a public subnet.
  • If a subnet is associated with a route table that does not have a route to an internet gateway, it’s known as a private subnet.
  • Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
  • Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance.
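The public-vs-private subnet rule above comes down to one check: a subnet is public when its route table contains a route targeting an internet gateway (`igw-*`).

```python
# Classify a subnet by the targets in its route table.
def is_public_subnet(route_targets: list) -> bool:
    return any(t.startswith("igw-") for t in route_targets)

# Subnet S1's route table R1 has the local route plus a route to I1:
print(is_public_subnet(["local", "igw-0abc123"]))  # True
# A subnet routing internet-bound traffic to a NAT device is private:
print(is_public_subnet(["local", "nat-0def456"]))  # False
```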
35
Q

An e-commerce company uses Microsoft Active Directory to provide users and groups with access to resources on the on-premises infrastructure. The company has extended its IT infrastructure to AWS in the form of a hybrid cloud. The engineering team at the company wants to run directory-aware workloads on AWS for a SQL Server-based application. The team also wants to configure a trust relationship to enable single sign-on (SSO) for its users to access resources in either domain.

As a solutions architect, which of the following AWS services would you recommend for this use-case?

A

AWS Managed Microsoft AD

  • AWS Directory Service provides multiple ways to use Amazon Cloud Directory and Microsoft Active Directory (AD) with other AWS services.
  • AWS Directory Service for Microsoft Active Directory (aka AWS Managed Microsoft AD) is powered by an actual Microsoft Windows Server Active Directory (AD), managed by AWS.
  • With AWS Managed Microsoft AD, you can run directory-aware workloads in the AWS Cloud such as SQL Server-based applications.
  • You can also configure a trust relationship between AWS Managed Microsoft AD in the AWS Cloud and your existing on-premises Microsoft Active Directory, providing users and groups with access to resources in either domain, using single sign-on (SSO).
36
Q

A company has a hybrid cloud structure for its on-premises data center and AWS Cloud infrastructure. The company wants to build a web log archival solution such that only the most frequently accessed logs are available as cached data locally while backing up all logs on Amazon S3.

As a solutions architect, which of the following solutions would you recommend for this use-case?

A

Use AWS Volume Gateway - Cached Volume - to store the most frequently accessed logs locally for low-latency access while storing the full volume with all logs in its Amazon S3 service bucket

  • AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage.

The service provides three different types of gateways that seamlessly connect on-premises applications to cloud storage, caching data locally for low-latency access:

  1. Tape Gateway
  2. File Gateway
  3. Volume Gateway

  • With cached volumes, the AWS Volume Gateway stores the full volume in its Amazon S3 service bucket, and just the recently accessed data is retained in the gateway’s local cache for low-latency access.