saa-c02-part-15 Flashcards

1
Q

A company has created a multi-tier application for its ecommerce website. The website uses an Application Load Balancer that resides in the public subnets, a web tier in the public subnets, and a MySQL cluster hosted on Amazon EC2 instances in the private subnets. The MySQL database needs to retrieve product catalog and pricing information that is hosted on the internet by a third-party provider. A solutions architect must devise a strategy that maximizes security without increasing operational overhead.
What should the solutions architect do to meet these requirements?

  1. Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance.
  2. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway.
  3. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the internet gateway.
  4. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the virtual private gateway.
A
  2. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway.

EC2 instances in private subnets + data hosted on the internet = NAT gateway needed for multiple instances

https://docs.aws.amazon.com/vpc/latest/userguide/images/nat-gateway-diagram.png
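The winning route-table change can be sketched with boto3's EC2 API; the resource IDs here are hypothetical placeholders, and the actual call is left commented out since it needs real credentials:

```python
# import boto3  # AWS SDK for Python; call left commented below

# Hypothetical IDs for illustration; substitute real resource IDs.
PRIVATE_ROUTE_TABLE_ID = "rtb-0example"
NAT_GATEWAY_ID = "nat-0example"

# Route all internet-bound traffic from the private subnets through
# the NAT gateway that lives in a public subnet.
route_params = {
    "RouteTableId": PRIVATE_ROUTE_TABLE_ID,
    "DestinationCidrBlock": "0.0.0.0/0",  # any internet-bound traffic
    "NatGatewayId": NAT_GATEWAY_ID,
}

# ec2 = boto3.client("ec2")
# ec2.create_route(**route_params)
```

The MySQL instances keep no public IPs; only outbound connections are possible through the NAT gateway, which is what maximizes security here.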

2
Q

A company is backing up on-premises databases to local file server shares using the SMB protocol. The company requires immediate access to 1 week of backup files to meet recovery objectives. Recovery after a week is less likely to occur, and the company can tolerate a delay in accessing those older backup files.
What should a solutions architect do to meet these requirements with the LEAST operational effort?

  1. Deploy Amazon FSx for Windows File Server to create a file system with exposed file shares with sufficient storage to hold all the desired backups.
  2. Deploy an AWS Storage Gateway file gateway with sufficient storage to hold 1 week of backups. Point the backups to SMB shares from the file gateway.
  3. Deploy Amazon Elastic File System (Amazon EFS) to create a file system with exposed NFS shares with sufficient storage to hold all the desired backups.
  4. Continue to back up to the existing file shares. Deploy AWS Database Migration Service (AWS DMS) and define a copy task to copy backup files older than 1 week to Amazon S3, and delete the backup files from the local file store.
A
  2. Deploy an AWS Storage Gateway file gateway with sufficient storage to hold 1 week of backups. Point the backups to SMB shares from the file gateway.

backing up on-premises databases = Gateway needed = 2

NFS or SMB = File Gateway

Client access is provided via SMB and NFS, and each file is stored as an object in Amazon S3 with a one-to-one mapping.

https://aws.amazon.com/blogs/storage/back-up-your-on-premises-applications-to-the-cloud-using-aws-storage-gateway/

3
Q

A company has developed a microservices application. It uses a client-facing API with Amazon API Gateway and multiple internal services hosted on Amazon EC2 instances to process user requests. The API is designed to support unpredictable surges in traffic, but internal services may become overwhelmed and unresponsive for a period of time during surges. A solutions architect needs to design a more reliable solution that reduces errors when internal services become unresponsive or unavailable.

Which solution meets these requirements?

  1. Use AWS Auto Scaling to scale up internal services when there is a surge in traffic.
  2. Use different Availability Zones to host internal services. Send a notification to a system administrator when an internal service becomes unresponsive.
  3. Use an Elastic Load Balancer to distribute the traffic between internal services. Configure Amazon CloudWatch metrics to monitor traffic to internal services.
  4. Use Amazon Simple Queue Service (Amazon SQS) to store user requests as they arrive. Change the internal services to retrieve the requests from the queue for processing.
A
  4. Use Amazon Simple Queue Service (Amazon SQS) to store user requests as they arrive. Change the internal services to retrieve the requests from the queue for processing.

reliable solution that reduces errors = decoupling = SQS
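The decoupling pattern can be shown locally with a stdlib queue standing in for SQS (in AWS this would be boto3 `send_message`/`receive_message`; all names here are illustrative):

```python
from queue import Queue

request_queue = Queue()  # stands in for the SQS queue

def api_frontend(request):
    # Producer side (API Gateway): enqueue the request instead of
    # calling the internal service directly, so traffic surges are
    # absorbed by the queue rather than overwhelming the service.
    request_queue.put(request)
    return {"status": "accepted"}

def internal_service_worker():
    # Consumer side: internal services pull work at their own pace.
    processed = []
    while not request_queue.empty():
        processed.append(request_queue.get())
        request_queue.task_done()
    return processed

# A surge of user requests is buffered, then drained when capacity allows.
for i in range(5):
    api_frontend({"order_id": i})
results = internal_service_worker()
```

If a worker dies mid-processing, SQS's visibility timeout returns the message to the queue, which is what "reduces errors when internal services become unresponsive".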

4
Q

A company is hosting 60 TB of production-level data in an Amazon S3 bucket. A solutions architect needs to bring that data on premises for quarterly audit requirements. This export of data must be encrypted while in transit. The company has low network bandwidth in place between AWS and its on-premises data center.
What should the solutions architect do to meet these requirements?

  1. Deploy AWS Migration Hub with 90-day replication windows for data transfer.
  2. Deploy an AWS Storage Gateway volume gateway on AWS. Enable a 90-day replication window to transfer the data.
  3. Deploy Amazon Elastic File System (Amazon EFS), with lifecycle policies enabled, on AWS. Use it to transfer the data.
  4. Deploy an AWS Snowball device in the on-premises data center after completing an export job request in the AWS Snowball console.
A
  4. Deploy an AWS Snowball device in the on-premises data center after completing an export job request in the AWS Snowball console.

data must be encrypted while in transit + 60 TB = snowball

5
Q

A company uses Amazon S3 to store its confidential audit documents. The S3 bucket uses bucket policies to restrict access to audit team IAM user credentials according to the principle of least privilege. Company managers are worried about accidental deletion of documents in the S3 bucket and want a more secure solution.
What should a solutions architect do to secure the audit documents?

  1. Enable the versioning and MFA Delete features on the S3 bucket.
  2. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account.
  3. Add an S3 Lifecycle policy to the audit team’s IAM user accounts to deny the s3:DeleteObject action during audit dates.
  4. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS key.
A
  1. Enable the versioning and MFA Delete features on the S3 bucket.

accidental deletion of documents = versioning + MFA Delete features on the S3 bucket
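The request shape for enabling both features is a useful thing to remember; this is a sketch with a hypothetical bucket name. Note that MFA Delete can only be enabled by the root account, and the request must carry the MFA device serial plus a current token code:

```python
# Sketch of enabling versioning + MFA Delete (bucket name hypothetical).
versioning_params = {
    "Bucket": "audit-documents-bucket",
    "VersioningConfiguration": {
        "Status": "Enabled",      # keep every object version
        "MFADelete": "Enabled",   # deletes require an MFA token
    },
    # The real call also needs the MFA argument, e.g.:
    # "MFA": "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
}
# s3 = boto3.client("s3")
# s3.put_bucket_versioning(**versioning_params)
```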

6
Q

A solutions architect is designing a new API using Amazon API Gateway that will receive requests from users. The volume of requests is highly variable; several hours can pass without receiving a single request. The data processing will take place asynchronously, but should be completed within a few seconds after a request is made.

Which compute service should the solutions architect have the API invoke to deliver the requirements at the lowest cost?

  1. An AWS Glue job
  2. An AWS Lambda function
  3. A containerized service hosted in Amazon Elastic Kubernetes Service (Amazon EKS)
  4. A containerized service hosted in Amazon ECS with Amazon EC2
A
  2. An AWS Lambda function

volume of requests is highly variable + lowest cost = lambda

7
Q

A company hosts its application in the AWS Cloud. The application runs on Amazon EC2 instances behind an Elastic Load Balancer in an Auto Scaling group and with an Amazon DynamoDB table. The company wants to ensure the application can be made available in another AWS Region with minimal downtime.
What should a solutions architect do to meet these requirements with the LEAST amount of downtime?

  1. Create an Auto Scaling group and a load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery Region’s load balancer.
  2. Create an AWS CloudFormation template to create EC2 instances, load balancers, and DynamoDB tables to be executed when needed. Configure DNS failover to point to the new disaster recovery Region’s load balancer.
  3. Create an AWS CloudFormation template to create EC2 instances and a load balancer to be executed when needed. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery Region’s load balancer.
  4. Create an Auto Scaling group and load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Create an Amazon CloudWatch alarm to trigger an AWS Lambda function that updates Amazon Route 53 pointing to the disaster recovery load balancer.
A
  1. Create an Auto Scaling group and a load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery Region’s load balancer.

minimal downtime = failover routing policy = 1,2,3

LEAST amount of downtime = not 2,3 because cloudformation template takes time to boot up instances

1 wins

8
Q

A business application is hosted on Amazon EC2 and uses Amazon S3 for encrypted object storage. The chief information security officer has directed that no application traffic between the two services should traverse the public internet.
Which capability should the solutions architect use to meet the compliance requirements?

  1. AWS Key Management Service (AWS KMS)
  2. VPC endpoint
  3. Private subnet
  4. Virtual private gateway
A
  2. VPC endpoint

S3 + traffic should not traverse the public internet = endpoint

9
Q

A solutions architect is designing a solution that requires frequent updates to a website that is hosted on Amazon S3 with versioning enabled. For compliance reasons, the older versions of the objects will not be accessed frequently and will need to be deleted after 2 years.
What should the solutions architect recommend to meet these requirements at the LOWEST cost?

  1. Use S3 batch operations to replace object tags. Expire the objects based on the modified tags.
  2. Configure an S3 Lifecycle policy to transition older versions of objects to S3 Glacier. Expire the objects after 2 years.
  3. Enable S3 Event Notifications on the bucket that sends older objects to the Amazon Simple Queue Service (Amazon SQS) queue for further processing.
  4. Replicate older object versions to a new bucket. Use an S3 Lifecycle policy to expire the objects in the new bucket after 2 years.
A
  2. Configure an S3 Lifecycle policy to transition older versions of objects to S3 Glacier. Expire the objects after 2 years.

deleted after 2 years + LOWEST cost = Lifecycle policy + Glacier
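The lifecycle rule targets *noncurrent* versions because the current version is the live website. A sketch of the configuration (transition day count and rule ID are assumptions; 730 days ≈ 2 years):

```python
# Sketch of the S3 lifecycle rule: archive older object versions to
# Glacier, then expire them after ~2 years.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-then-expire-old-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
            ],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 730},
        }
    ]
}
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="website-bucket", LifecycleConfiguration=lifecycle_config)
```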

10
Q

A company runs an application on an Amazon EC2 instance backed by Amazon Elastic Block Store (Amazon EBS). The instance needs to be available for 12 hours daily. The company wants to save costs by making the instance unavailable outside the window required for the application. However, the contents of the instance’s memory must be preserved whenever the instance is unavailable.
What should a solutions architect do to meet this requirement?

  1. Stop the instance outside the application’s availability window. Start up the instance again when required.
  2. Hibernate the instance outside the application’s availability window. Start up the instance again when required.
  3. Use Auto Scaling to scale down the instance outside the application’s availability window. Scale up the instance when required.
  4. Terminate the instance outside the application’s availability window. Launch the instance by using a preconfigured Amazon Machine Image (AMI) when required.
A
  2. Hibernate the instance outside the application’s availability window. Start up the instance again when required.

contents of the instance’s memory must be preserved whenever the instance is unavailable = Hibernate

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html
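Hibernation must be enabled at launch (`HibernationOptions`), and the encrypted EBS root volume must be large enough to hold RAM contents. The stop call then looks like this (instance ID hypothetical, call left commented):

```python
# Sketch of hibernating an EC2 instance instead of a plain stop.
stop_params = {
    "InstanceIds": ["i-0example"],
    "Hibernate": True,  # write RAM contents to the EBS root volume
}
# ec2 = boto3.client("ec2")
# ec2.stop_instances(**stop_params)
# ec2.start_instances(InstanceIds=["i-0example"])  # resumes with RAM restored
```

While hibernated, only EBS storage (and any Elastic IPs) is billed, which is what delivers the cost savings outside the 12-hour window.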

11
Q

A solutions architect is creating a new VPC design. There are two public subnets for the load balancer, two private subnets for web servers, and two private subnets for MySQL. The web servers use only HTTPS. The solutions architect has already created a security group for the load balancer allowing port 443 from 0.0.0.0/0. Company policy requires that each resource has the least access required to still be able to perform its tasks.
Which additional configuration strategy should the solutions architect use to meet these requirements?

  1. Create a security group for the web servers and allow port 443 from 0.0.0.0/0. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.
  2. Create a network ACL for the web servers and allow port 443 from 0.0.0.0/0. Create a network ACL for the MySQL servers and allow port 3306 from the web servers security group.
  3. Create a security group for the web servers and allow port 443 from the load balancer. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.
  4. Create a network ACL for the web servers and allow port 443 from the load balancer. Create a network ACL for the MySQL servers and allow port 3306 from the web servers security group.
A
  3. Create a security group for the web servers and allow port 443 from the load balancer. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.

web servers use only HTTPS = security group allow port 443 needed = 1,3

from the load balancer = least privilege = 3

least access required = security groups
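The security-group chaining can be sketched as ingress-rule parameters (group IDs hypothetical): each tier only admits traffic from the security group one hop upstream, which is the least-privilege pattern the question wants.

```python
# Web tier: only HTTPS, and only from the load balancer's security group.
web_sg_ingress = {
    "GroupId": "sg-web",
    "IpPermissions": [{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": "sg-alb"}],  # only from the ALB
    }],
}
# DB tier: only MySQL, and only from the web servers' security group.
db_sg_ingress = {
    "GroupId": "sg-mysql",
    "IpPermissions": [{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-web"}],  # only from web tier
    }],
}
# ec2.authorize_security_group_ingress(**web_sg_ingress)
# ec2.authorize_security_group_ingress(**db_sg_ingress)
```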

12
Q

A company hosts historical weather records in Amazon S3. The records are downloaded from the company’s website by way of a URL that resolves to a domain name. Users all over the world access this content through subscriptions. A third-party provider hosts the company’s root domain name, but the company recently migrated some of its services to Amazon Route 53. The company wants to consolidate contracts, reduce latency for users, and reduce costs related to serving the application to subscribers.
Which solution meets these requirements?

  1. Create a web distribution on Amazon CloudFront to serve the S3 content for the application. Create a CNAME record in a Route 53 hosted zone that points to the CloudFront distribution, resolving to the application’s URL domain name.
  2. Create a web distribution on Amazon CloudFront to serve the S3 content for the application. Create an ALIAS record in the Amazon Route 53 hosted zone that points to the CloudFront distribution, resolving to the application’s URL domain name.
  3. Create an A record in a Route 53 hosted zone for the application. Create a Route 53 traffic policy for the web application, and configure a geolocation rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy.
  4. Create an A record in a Route 53 hosted zone for the application. Create a Route 53 traffic policy for the web application, and configure a geoproximity rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy.
A
  2. Create a web distribution on Amazon CloudFront to serve the S3 content for the application. Create an ALIAS record in the Amazon Route 53 hosted zone that points to the CloudFront distribution, resolving to the application’s URL domain name.

reduce latency for users = CloudFront = 1,2

a CNAME record cannot be created at the zone apex (root domain), so not 1 = 2 wins

Amazon Route 53 alias records provide a Route 53–specific extension to DNS functionality. Alias records let you route traffic to selected AWS resources, such as CloudFront distributions and Amazon S3 buckets. They also let you route traffic from one record in a hosted zone to another record.

Unlike a CNAME record, you can create an alias record at the top node of a DNS namespace, also known as the zone apex. For example, if you register the DNS name example.com, the zone apex is example.com. You can’t create a CNAME record for example.com, but you can create an alias record for example.com that routes traffic to www.example.com. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html
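The alias record at the apex can be sketched as a Route 53 change batch. The domain and distribution name are hypothetical; `Z2FDTNDATAQYW2` is the fixed hosted zone ID that alias targets for CloudFront distributions use:

```python
# Sketch of an alias A record at the zone apex pointing at CloudFront.
change_batch = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com",  # zone apex - a CNAME is not allowed here
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront's zone ID
                "DNSName": "d111111abcdef8.cloudfront.net",
                "EvaluateTargetHealth": False,
            },
        },
    }]
}
# route53 = boto3.client("route53")
# route53.change_resource_record_sets(
#     HostedZoneId="Z0HOSTEDZONE", ChangeBatch=change_batch)
```

Alias queries to AWS resources are also free of Route 53 query charges, which helps the "reduce costs" requirement.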

13
Q

A company owns an asynchronous API that is used to ingest user requests and, based on the request type, dispatch requests to the appropriate microservice for processing. The company is using Amazon API Gateway to deploy the API front end, and an AWS Lambda function that invokes Amazon DynamoDB to store user requests before dispatching them to the processing microservices.
The company provisioned as much DynamoDB throughput as its budget allows, but the company is still experiencing availability issues and is losing user requests.
What should a solutions architect do to address this issue without impacting existing users?

  1. Add throttling on the API Gateway with server-side throttling limits.
  2. Use DynamoDB Accelerator (DAX) and Lambda to buffer writes to DynamoDB.
  3. Create a secondary index in DynamoDB for the table with the user requests.
  4. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB.
A
  4. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB.

losing user requests = decouple = SQS

company provisioned as much DynamoDB throughput as its budget allows = don’t try to squeeze more out of DynamoDB = not DAX

14
Q

A company is moving its on-premises applications to Amazon EC2 instances. However, as a result of fluctuating compute requirements, the EC2 instances must always be ready to use between 8 AM and 5 PM in specific Availability Zones.
Which EC2 instances should the company choose to run the applications?

  1. Scheduled Reserved Instances
  2. On-Demand Instances
  3. Spot Instances as part of a Spot Fleet
  4. EC2 instances in an Auto Scaling group
A

2. On-Demand Instances

fluctuating compute requirements = On-Demand Instances

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-scheduled-instances.html

Note from the AWS docs: “We do not have any capacity for purchasing Scheduled Reserved Instances or any plans to make it available in the future. To reserve capacity, use On-Demand Capacity Reservations instead.”

15
Q

A company is launching a new application deployed on an Amazon Elastic Container Service (Amazon ECS) cluster and is using the Fargate launch type for ECS tasks. The company is monitoring CPU and memory usage because it is expecting high traffic to the application upon its launch. However, the company wants to reduce costs when utilization decreases.
What should a solutions architect recommend?

  1. Use Amazon EC2 Auto Scaling to scale at certain periods based on previous traffic patterns.
  2. Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm.
  3. Use Amazon EC2 Auto Scaling with simple scaling policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.
  4. Use AWS Application Auto Scaling with target tracking policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.
A
  4. Use AWS Application Auto Scaling with target tracking policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.

reduce costs when utilization decreases = auto scaling = 1, 3, 4

Fargate launch type = no EC2 instances to manage = not 1, 3

monitoring CPU and memory usage = Application Auto Scaling with target tracking policies = 4
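The target tracking setup can be sketched with the `application-autoscaling` API; the cluster/service names, capacity bounds, and 60% target are assumptions for illustration:

```python
# Sketch: register the ECS service as a scalable target, then attach a
# target tracking policy that keeps average CPU near the target value.
scalable_target = {
    "ServiceNamespace": "ecs",
    "ResourceId": "service/my-cluster/my-fargate-service",
    "ScalableDimension": "ecs:service:DesiredCount",
    "MinCapacity": 1,
    "MaxCapacity": 20,
}
scaling_policy = {
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "ServiceNamespace": "ecs",
    "ResourceId": scalable_target["ResourceId"],
    "ScalableDimension": scalable_target["ScalableDimension"],
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 60.0,  # keep average CPU utilization near 60%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
}
# client = boto3.client("application-autoscaling")
# client.register_scalable_target(**scalable_target)
# client.put_scaling_policy(**scaling_policy)
```

Target tracking scales both out on launch-day traffic and back in when utilization drops, which covers both halves of the requirement.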

16
Q

A company is building an application on Amazon EC2 instances that generates temporary transactional data. The application requires access to data storage that can provide configurable and consistent IOPS.
What should a solutions architect recommend?

  1. Provision an EC2 instance with a Throughput Optimized HDD (st1) root volume and a Cold HDD (sc1) data volume.
  2. Provision an EC2 instance with a Throughput Optimized HDD (st1) volume that will serve as the root and data volume.
  3. Provision an EC2 instance with a General Purpose SSD (gp2) root volume and Provisioned IOPS SSD (io1) data volume.
  4. Provision an EC2 instance with a General Purpose SSD (gp2) root volume. Configure the application to store its data in an Amazon S3 bucket.
A
  3. Provision an EC2 instance with a General Purpose SSD (gp2) root volume and Provisioned IOPS SSD (io1) data volume.

configurable and consistent IOPS = Provisioned IOPS SSD (io1)

temporary transactional data alone might suggest an instance store, but instance store IOPS is not configurable = io1 data volume wins

17
Q

A solutions architect needs to design a resilient solution for Windows users’ home directories. The solution must provide fault tolerance, file-level backup and recovery, and access control, based upon the company’s Active Directory.
Which storage solution meets these requirements?

  1. Configure Amazon S3 to store the users’ home directories. Join Amazon S3 to Active Directory.
  2. Configure a Multi-AZ file system with Amazon FSx for Windows File Server. Join Amazon FSx to Active Directory.
  3. Configure Amazon Elastic File System (Amazon EFS) for the users’ home directories. Configure AWS Single Sign-On with Active Directory.
  4. Configure Amazon Elastic Block Store (Amazon EBS) to store the users’ home directories. Configure AWS Single Sign-On with Active Directory.
A
  2. Configure a Multi-AZ file system with Amazon FSx for Windows File Server. Join Amazon FSx to Active Directory.

resilient solution for Windows = FSx

18
Q

A company wants to move a multi-tiered application from on premises to the AWS Cloud to improve the application’s performance. The application consists of application tiers that communicate with each other by way of RESTful services. Transactions are dropped when one tier becomes overloaded. A solutions architect must design a solution that resolves these issues and modernizes the application.
Which solution meets these requirements and is the MOST operationally efficient?

  1. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer. Use Amazon Simple Queue Service (Amazon SQS) as the communication layer between application services.
  2. Use Amazon CloudWatch metrics to analyze the application performance history to determine the server’s peak utilization during the performance failures. Increase the size of the application server’s Amazon EC2 instances to meet the peak requirements.
  3. Use Amazon Simple Notification Service (Amazon SNS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SNS queue length and scale up and down as required.
  4. Use Amazon Simple Queue Service (Amazon SQS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected.
A
  1. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer. Use Amazon Simple Queue Service (Amazon SQS) as the communication layer between application services.

Transactions are dropped when one tier becomes overloaded = decoupling = SQS = 1,4

RESTful services = API Gateway = 1

modernizes = serverless = lambda

19
Q

A company serves a multilingual website from a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). This architecture is currently running in the us-west-1 Region but is exhibiting high request latency for users located in other parts of the world.
The website needs to serve requests quickly and efficiently regardless of a user’s location. However, the company does not want to recreate the existing architecture across multiple Regions.
How should a solutions architect accomplish this?

  1. Replace the existing architecture with a website served from an Amazon S3 bucket. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.
  2. Configure an Amazon CloudFront distribution with the ALB as the origin. Set the cache behavior settings to only cache based on the Accept-Language request header.
  3. Set up Amazon API Gateway with the ALB as an integration. Configure API Gateway to use an HTTP integration type. Set up an API Gateway stage to enable the API cache.
  4. Launch an EC2 instance in each additional Region and configure NGINX to act as a cache server for that Region. Put all the instances plus the ALB behind an Amazon Route 53 record set with a geolocation routing policy.
A
  2. Configure an Amazon CloudFront distribution with the ALB as the origin. Set the cache behavior settings to only cache based on the Accept-Language request header.

serve requests quickly and efficiently regardless of a user’s location = CloudFront = 1,2

static-only content is never confirmed, and the company does not want to recreate the existing architecture = not S3 = not 1

multilingual = Accept-Language request header = 2

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html
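The cache-behavior fragment can be sketched as follows. This shows the legacy `ForwardedValues` shape for forwarding and caching on a single header; a cache policy is the current equivalent, and the origin ID is hypothetical:

```python
# Sketch of a CloudFront cache behavior that caches a separate variant
# per Accept-Language value, so each language is served from edge caches.
cache_behavior = {
    "TargetOriginId": "alb-origin",  # hypothetical origin ID for the ALB
    "ViewerProtocolPolicy": "redirect-to-https",
    "ForwardedValues": {
        "QueryString": False,
        "Cookies": {"Forward": "none"},
        # Cache key includes ONLY this header -> one cached copy per language.
        "Headers": {"Quantity": 1, "Items": ["Accept-Language"]},
    },
}
```

Forwarding only the one header keeps the cache hit ratio high; forwarding all headers would effectively disable caching.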

20
Q

A software vendor is deploying a new software-as-a-service (SaaS) solution that will be utilized by many AWS users. The service is hosted in a VPC behind a Network Load Balancer. The software vendor wants to provide access to this service to users with the least amount of administrative overhead and without exposing the service to the public internet.
What should a solutions architect do to accomplish this goal?

  1. Create a peering VPC connection from each user’s VPC to the software vendor’s VPC.
  2. Deploy a transit VPC in the software vendor’s AWS account. Create a VPN connection with each user account.
  3. Connect the service in the VPC with an AWS PrivateLink endpoint. Have users subscribe to the endpoint.
  4. Deploy a transit VPC in the software vendor’s AWS account. Create an AWS Direct Connect connection with each user account.
A
  3. Connect the service in the VPC with an AWS PrivateLink endpoint. Have users subscribe to the endpoint.

without exposing the service to the public internet = endpoint = 3

each user = typically a bad answer

least amount of administrative overhead = PrivateLink endpoint

https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-service.html

1 - VPC peering with each user’s VPC = too much administrative effort, rule out

2 - VPN traverses the public internet, rule out

4 - Direct Connect with each user account = too much administrative effort and cost, rule out
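The PrivateLink flow has two sides, sketched here with hypothetical ARNs/IDs: the vendor exposes the NLB as an endpoint service, then each subscriber creates an interface endpoint in its own VPC, with no public internet path:

```python
# Vendor side: publish the NLB-fronted service as a VPC endpoint service.
endpoint_service = {
    "NetworkLoadBalancerArns": [
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/saas-nlb/abc123",
    ],
    "AcceptanceRequired": True,  # vendor approves each subscriber
}
# Consumer side: create an interface endpoint to the vendor's service.
consumer_endpoint = {
    "VpcId": "vpc-consumer",
    "ServiceName": "com.amazonaws.vpce.us-east-1.vpce-svc-0example",
    "VpcEndpointType": "Interface",
    "SubnetIds": ["subnet-0example"],
}
# vendor:   ec2.create_vpc_endpoint_service_configuration(**endpoint_service)
# consumer: ec2.create_vpc_endpoint(**consumer_endpoint)
```

The vendor never manages peering, VPNs, or routing per customer, which is the "least administrative overhead" in the question.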