saa-c02-part-11 Flashcards
A company is designing a new web service that will run on Amazon EC2 instances behind an Elastic Load Balancer. However, many of the web service clients can only reach IP addresses whitelisted on their firewalls.
What should a solutions architect recommend to meet the clients’ needs?
- A Network Load Balancer with an associated Elastic IP address.
- An Application Load Balancer with an associated Elastic IP address
- An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address
- An EC2 instance with a public IP address running as a proxy in front of the load balancer
- A Network Load Balancer with an associated Elastic IP address
IP addresses whitelisted on their firewalls = clients need fixed IPs = Elastic IP addresses (EIPs) attached to a Network Load Balancer (NLB)
ALB does not support EIPs = not ALB
a Route 53 A record needs a static IP to point at, which only the NLB provides = not the A record on its own
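The static-IP setup can be sketched with boto3's elbv2 client. This is a sketch under assumptions: the subnet and EIP allocation IDs are hypothetical placeholders, and the actual call is shown commented out.

```python
# Sketch: an internet-facing NLB whose nodes use pre-allocated Elastic IPs,
# so clients can whitelist fixed addresses. Subnet and allocation IDs are
# hypothetical placeholders.
nlb_params = {
    "Name": "whitelist-friendly-nlb",
    "Type": "network",
    "Scheme": "internet-facing",
    "SubnetMappings": [
        # One Elastic IP per Availability Zone subnet
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-0001"},
        {"SubnetId": "subnet-bbbb2222", "AllocationId": "eipalloc-0002"},
    ],
}
# With credentials configured, the call would be:
# import boto3
# boto3.client("elbv2").create_load_balancer(**nlb_params)
```

The key detail is `SubnetMappings` with `AllocationId`: that is what pins each load balancer node to a fixed address clients can whitelist.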
A company wants to host a web application on AWS that will communicate to a database within a VPC. The application should be highly available.
What should a solutions architect recommend?
- Create two Amazon EC2 instances to host the web servers behind a load balancer, and then deploy the database on a large instance.
- Deploy a load balancer in multiple Availability Zones with an Auto Scaling group for the web servers, and then deploy Amazon RDS in multiple Availability Zones.
- Deploy a load balancer in the public subnet with an Auto Scaling group for the web servers, and then deploy the database on an Amazon EC2 instance in the private subnet.
- Deploy two web servers with an Auto Scaling group, configure a domain that points to the two web servers, and then deploy a database architecture in multiple Availability Zones.
- Deploy a load balancer in multiple Availability Zones with an Auto Scaling group for the web servers, and then deploy Amazon RDS in multiple Availability Zones.
highly available = Multi-AZ for both the web and database tiers = 2,4
load balancer + Auto Scaling group + Amazon RDS Multi-AZ = managed failover at every tier = 2 wins
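The database half of the chosen answer can be sketched as create_db_instance parameters; a hedged sketch, assuming boto3's rds client, with identifier, class, and sizes as placeholders.

```python
# Sketch: Multi-AZ RDS MySQL instance for the highly available web app.
# MultiAZ=True provisions a synchronous standby in a second AZ with
# automatic failover. Identifier, class, and storage are placeholders.
rds_params = {
    "DBInstanceIdentifier": "webapp-db",
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,
    "MultiAZ": True,  # the setting that delivers high availability
    "MasterUsername": "admin",
}
# import boto3; boto3.client("rds").create_db_instance(**rds_params)
```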
A company’s packaged application dynamically creates and returns single-use text files in response to user requests. The company is using Amazon CloudFront for distribution, but wants to further reduce data transfer costs. The company cannot modify the application’s source code.
What should a solutions architect do to reduce costs?
- Use Lambda@Edge to compress the files as they are sent to users.
- Enable Amazon S3 Transfer Acceleration to reduce the response times.
- Enable caching on the CloudFront distribution to store generated files at the edge.
- Use Amazon S3 multipart uploads to move the files to Amazon S3 before returning them to users.
- Use Lambda@Edge to compress the files as they are sent to users.
reduce data transfer costs = compress the files = smaller files
single-use text files = can’t cache = not 3
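A minimal sketch of what the Lambda@Edge compression could look like as an origin-response handler. This assumes the event shape CloudFront documents for Lambda@Edge, an uncompressed text body from the origin, and a viewer that accepts gzip; the Accept-Encoding check and error handling are omitted.

```python
import base64
import gzip


def handler(event, context):
    # Lambda@Edge origin-response trigger: gzip the body before it leaves
    # the edge, shrinking the data-transfer bill without touching the
    # packaged application's source code.
    response = event["Records"][0]["cf"]["response"]
    body = response.get("body", "").encode("utf-8")
    response["body"] = base64.b64encode(gzip.compress(body)).decode("ascii")
    response["bodyEncoding"] = "base64"
    response.setdefault("headers", {})["content-encoding"] = [
        {"key": "Content-Encoding", "value": "gzip"}
    ]
    return response
```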
A database is on an Amazon RDS MySQL 5.6 Multi-AZ DB instance that experiences highly dynamic reads. Application developers notice a significant slowdown when testing read performance from a secondary AWS Region. The developers want a solution that provides less than 1 second of read replication latency.
What should the solutions architect recommend?
- Install MySQL on Amazon EC2 in the secondary Region.
- Migrate the database to Amazon Aurora with cross-Region replicas.
- Create another RDS for MySQL read replica in the secondary Region.
- Implement Amazon ElastiCache to improve database query performance.
- Migrate the database to Amazon Aurora with cross-Region replicas.
1 second of read replication = Aurora can support less than 1 second, RDS MySQL cannot
A company is planning to deploy an Amazon RDS DB instance running Amazon Aurora. The company has a backup retention policy requirement of 90 days.
Which solution should a solutions architect recommend?
- Set the backup retention period to 90 days when creating the RDS DB instance.
- Configure RDS to copy automated snapshots to a user-managed Amazon S3 bucket with a lifecycle policy set to delete after 90 days.
- Create an AWS Backup plan to perform a daily snapshot of the RDS database with the retention set to 90 days. Create an AWS Backup job to schedule the execution of the backup plan daily.
- Use a daily scheduled event with Amazon CloudWatch Events to execute a custom AWS Lambda function that makes a copy of the RDS automated snapshot. Purge snapshots older than 90 days.
- Create an AWS Backup plan to perform a daily snapshot of the RDS database with the retention set to 90 days. Create an AWS Backup job to schedule the execution of the backup plan daily.
RDS/Aurora automated backups retain at most 35 days, and RDS cannot copy automated snapshots into a user-managed S3 bucket = AWS Backup plan with 90-day retention
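For reference, the AWS Backup option above expresses retention as a plan rule; a hedged sketch of such a rule, with the plan name, vault name, and cron schedule as hypothetical placeholders.

```python
# Sketch: AWS Backup plan rule keeping daily RDS/Aurora snapshots 90 days.
# Names and the cron schedule are placeholders.
backup_rule = {
    "RuleName": "daily-rds-90d",
    "TargetBackupVaultName": "rds-vault",
    "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
    "Lifecycle": {"DeleteAfterDays": 90},       # the 90-day retention policy
}
backup_plan = {"BackupPlanName": "rds-90-day-plan", "Rules": [backup_rule]}
# import boto3
# boto3.client("backup").create_backup_plan(BackupPlan=backup_plan)
```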
A company currently has 250 TB of backup files stored in Amazon S3 in a vendor’s proprietary format. Using a Linux-based software application provided by the vendor, the company wants to retrieve files from Amazon S3, transform the files to an industry-standard format, and re-upload them to Amazon S3. The company wants to minimize the data transfer charges associated with this conversion.
What should a solutions architect do to accomplish this?
- Install the conversion software as an Amazon S3 batch operation so the data is transformed without leaving Amazon S3.
- Install the conversion software onto an on-premises virtual machine. Perform the transformation and re-upload the files to Amazon S3 from the virtual machine.
- Use AWS Snowball Edge devices to export the data and install the conversion software onto the devices. Perform the data transformation and re-upload the files to Amazon S3 from the Snowball Edge devices.
- Launch an Amazon EC2 instance in the same Region as Amazon S3 and install the conversion software onto the instance. Perform the transformation and re-upload the files to Amazon S3 from the EC2 instance.
- Launch an Amazon EC2 instance in the same Region as Amazon S3 and install the conversion software onto the instance. Perform the transformation and re-upload the files to Amazon S3 from the EC2 instance.
minimize the data transfer charges = keep the data inside AWS = not Snowball, not on-premises = 1,4
S3 Batch Operations invokes Lambda functions and cannot run the vendor’s Linux software = 4 wins
A company is migrating a NoSQL database cluster to Amazon EC2. The database automatically replicates data to maintain at least three copies of the data. I/O throughput of the servers is the highest priority. Which instance type should a solutions architect recommend for the migration?
- Storage optimized instances with instance store
- Burstable general purpose instances with an Amazon Elastic Block Store (Amazon EBS) volume
- Memory optimized instances with Amazon Elastic Block Store (Amazon EBS) optimization enabled
- Compute optimized instances with Amazon Elastic Block Store (Amazon EBS) optimization enabled
I/O throughput of the servers is the highest priority = storage optimized with instance store
at least three copies of the data = replication already covers durability, so ephemeral instance store is acceptable
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage-optimized-instances.html
A company has a large Microsoft SharePoint deployment running on-premises that requires Microsoft Windows shared file storage. The company wants to migrate this workload to the AWS Cloud and is considering various storage options. The storage solution must be highly available and integrated with Active Directory for access control.
Which solution will satisfy these requirements?
- Configure Amazon Elastic File System (Amazon EFS) storage and set the Active Directory domain for authentication.
- Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones.
- Create an Amazon S3 bucket and configure Microsoft Windows Server to mount it as a volume.
- Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.
- Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.
Windows shared file storage = FSx
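A hedged sketch of the FSx for Windows File Server request with AD integration, assuming boto3's fsx create_file_system API; the directory, subnet IDs, and capacity are placeholders.

```python
# Sketch: Multi-AZ FSx for Windows file system joined to a managed
# Active Directory, serving SharePoint's SMB shares. IDs are placeholders.
fsx_params = {
    "FileSystemType": "WINDOWS",
    "StorageCapacity": 2048,  # GiB
    "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    "WindowsConfiguration": {
        "ActiveDirectoryId": "d-1234567890",   # AD used for access control
        "DeploymentType": "MULTI_AZ_1",        # highly available
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 32,
    },
}
# import boto3; boto3.client("fsx").create_file_system(**fsx_params)
```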
A company has a web application with sporadic usage patterns. There is heavy usage at the beginning of each month, moderate usage at the start of each week, and unpredictable usage during the week. The application consists of a web server and a MySQL database server running inside the data center. The company would like to move the application to the AWS Cloud, and needs to select a cost-effective database platform that will not require database modifications.
Which solution will meet these requirements?
- Amazon DynamoDB
- Amazon RDS for MySQL
- MySQL-compatible Amazon Aurora Serverless
- MySQL deployed on Amazon EC2 in an Auto Scaling group
- MySQL-compatible Amazon Aurora Serverless
sporadic usage patterns = ASG or Serverless = 3,4
cost-effective + no database modifications = Aurora Serverless scales down (and can pause) when idle while staying MySQL-compatible = 3 wins
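The chosen answer can be sketched as a create_db_cluster request with serverless engine mode; a sketch under assumptions: boto3's rds client, Aurora Serverless v1-style scaling configuration, and placeholder identifier and capacity numbers.

```python
# Sketch: MySQL-compatible Aurora Serverless cluster that follows the
# sporadic load and pauses when idle. Identifier and capacities are
# placeholders; engine version choice depends on the MySQL compatibility
# level required.
cluster_params = {
    "DBClusterIdentifier": "webapp-serverless",
    "Engine": "aurora-mysql",
    "EngineMode": "serverless",
    "ScalingConfiguration": {
        "MinCapacity": 1,
        "MaxCapacity": 16,
        "AutoPause": True,              # stop paying during idle periods
        "SecondsUntilAutoPause": 300,
    },
}
# import boto3; boto3.client("rds").create_db_cluster(**cluster_params)
```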
A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All application components will be deployed on the AWS infrastructure.
The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application must be able to store petabytes of data.
Which combination of storage and caching should the solutions architect use?
- Amazon S3 with Amazon CloudFront
- Amazon S3 Glacier with Amazon ElastiCache
- Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront
- AWS Storage Gateway with Amazon ElastiCache
- Amazon S3 with Amazon CloudFront
minimize the amount of time users wait = CloudFront edge caching
petabytes of data = Amazon S3 (Glacier is archival and too slow for viewing)
A solutions architect is creating an application that will handle batch processing of large amounts of data. The input data will be held in Amazon S3 and the output data will be stored in a different S3 bucket. For processing, the application will transfer the data over the network between multiple Amazon EC2 instances.
What should the solutions architect do to reduce the overall data transfer costs?
- Place all the EC2 instances in an Auto Scaling group.
- Place all the EC2 instances in the same AWS Region.
- Place all the EC2 instances in the same Availability Zone.
- Place all the EC2 instances in private subnets in multiple Availability Zones.
- Place all the EC2 instances in the same Availability Zone.
reduce the overall data transfer costs = close together as possible = same AZ
“Also, be aware of inter-Availability Zones data transfer charges between Amazon EC2 instances, even within the same region. If possible, the instances in a development or test environment that need to communicate with each other should be co-located within the same Availability Zone to avoid data transfer charges. (This doesn’t apply to production workloads which will most likely need to span multiple Availability Zones for high availability.)” https://aws.amazon.com/blogs/mt/using-aws-cost-explorer-to-analyze-data-transfer-costs/
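The cost difference can be made concrete with a back-of-the-envelope calculation. This assumes the commonly published inter-AZ rate of $0.01/GB charged in each direction; rates vary by Region, so check current pricing.

```python
# Back-of-the-envelope: cost of shuffling 10 TB between EC2 instances.
# Assumed rates: $0.01/GB in + $0.01/GB out for cross-AZ traffic;
# same-AZ traffic over private IPs is free.
tb_transferred = 10
gb = tb_transferred * 1024
cross_az_cost = gb * (0.01 + 0.01)  # charged on both sides of the transfer
same_az_cost = 0.0
print(cross_az_cost)  # 204.8
```

Even at a penny per gigabyte, batch workloads that repeatedly shuttle terabytes between instances make the same-AZ placement worthwhile.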
A company hosts its core network services, including directory services and DNS, in its on-premises data center. The data center is connected to the AWS Cloud using AWS Direct Connect (DX). Additional AWS accounts are planned that will require quick, cost-effective, and consistent access to these network services.
What should a solutions architect implement to meet these requirements with the LEAST amount of operational overhead?
- Create a DX connection in each new account. Route the network traffic to the on-premises servers.
- Configure VPC endpoints in the DX VPC for all required services. Route the network traffic to the on-premises servers.
- Create a VPN connection between each new account and the DX VPC. Route the network traffic to the on-premises servers.
- Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.
- Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.
LEAST amount of operational overhead = AWS Transit Gateway connects VPCs and on-premises networks through a central hub
https://aws.amazon.com/transit-gateway/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc
A company operates an ecommerce website on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group. The site is experiencing performance issues related to a high request rate from illegitimate external systems with changing IP addresses. The security team is worried about potential DDoS attacks against the website. The company must block the illegitimate incoming requests in a way that has a minimal impact on legitimate users.
What should a solutions architect recommend?
- Deploy Amazon Inspector and associate it with the ALB.
- Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.
- Deploy rules to the network ACLs associated with the ALB to block the incoming traffic.
- Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.
- Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.
high request rate = WAF rate-limiting
changing IP addresses = static NACL or IP-set blocks can’t keep up = WAF rate-based rule tracks each source IP automatically
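A hedged sketch of the rate-based rule from the chosen answer, following the wafv2 rule JSON shape; the limit, names, and priority are illustrative placeholders.

```python
# Sketch: AWS WAF rate-based rule blocking any single source IP that
# exceeds the request limit within a 5-minute window; legitimate users
# below the limit are unaffected. Limit and names are placeholders.
rate_rule = {
    "Name": "block-high-request-rate",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,             # requests per 5 minutes per IP
            "AggregateKeyType": "IP",  # counted per source address
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "blockHighRequestRate",
    },
}
```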
A company receives structured and semi-structured data from various sources once every day. A solutions architect needs to design a solution that leverages big data processing frameworks. The data should be accessible using SQL queries and business intelligence tools.
What should the solutions architect recommend to build the MOST high-performing solution?
- Use AWS Glue to process data and Amazon S3 to store data.
- Use Amazon EMR to process data and Amazon Redshift to store data.
- Use Amazon EC2 to process data and Amazon Elastic Block Store (Amazon EBS) to store data.
- Use Amazon Kinesis Data Analytics to process data and Amazon Elastic File System (Amazon EFS) to store data.
- Use Amazon EMR to process data and Amazon Redshift to store data.
SQL queries = Redshift
big data processing = EMR
A company is hosting an election reporting website on AWS for users around the world. The website uses Amazon EC2 instances for the web and application tiers in an Auto Scaling group with Application Load Balancers. The database tier uses an Amazon RDS for MySQL database. The website is updated with election results once an hour and has historically observed hundreds of users accessing the reports.
The company is expecting a significant increase in demand because of upcoming elections in different countries. A solutions architect must improve the website’s ability to handle additional demand while minimizing the need for additional EC2 instances.
Which solution will meet these requirements?
- Launch an Amazon ElastiCache cluster to cache common database queries.
- Launch an Amazon CloudFront web distribution to cache commonly requested website content.
- Enable disk-based caching on the EC2 instances to cache commonly requested website content.
- Deploy a reverse proxy into the design using an EC2 instance with caching enabled for commonly requested website content.
- Launch an Amazon CloudFront web distribution to cache commonly requested website content.
improve the website’s ability to handle additional demand = caching = CloudFront
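Since the results change only once an hour, the distribution's cache TTL can absorb most requests before they reach the EC2 tier; a hedged sketch of the relevant cache-behavior values (field names follow the CloudFront distribution config shape; values are illustrative).

```python
# Sketch: CloudFront cache behavior for hourly-updated election results.
# A 1-hour default TTL means repeat requests are served from the edge,
# minimizing the need for additional EC2 instances. Values are placeholders.
cache_behavior = {
    "TargetOriginId": "alb-origin",
    "ViewerProtocolPolicy": "redirect-to-https",
    "MinTTL": 0,
    "DefaultTTL": 3600,  # results refresh once an hour
    "MaxTTL": 3600,
}
```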