saa-c02-part-01 Flashcards
A solutions architect is designing a solution where users will be directed to a backup static error page if the primary website is unavailable. The primary website’s DNS records are hosted in Amazon Route 53, and the domain points to an Application Load Balancer (ALB).
Which configuration should the solutions architect use to meet the company’s needs while minimizing changes and infrastructure overhead?
- Point a Route 53 alias record to an Amazon CloudFront distribution with the ALB as one of its origins. Then, create custom error pages for the distribution.
- Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page hosted within an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.
- Update the Route 53 record to use a latency-based routing policy. Add the backup static error page hosted within an Amazon S3 bucket to the record so the traffic is sent to the most responsive endpoints.
- Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance hosting a static error page as endpoints. Route 53 will only send requests to the instance if the health checks fail for the ALB.
- Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page hosted within an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.
backup static error page = active-passive failover
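A minimal boto3 sketch of this active-passive failover, assuming a placeholder hosted zone, ALB DNS name, and S3 website endpoint (the alias-target hosted zone IDs are placeholders too):

```python
import boto3

route53 = boto3.client("route53")

# Health check against the ALB (domain name and path are placeholders).
hc = route53.create_health_check(
    CallerReference="alb-health-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "app-alb-1234.us-east-1.elb.amazonaws.com",
        "Port": 443,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # placeholder hosted zone
    ChangeBatch={"Changes": [
        {   # PRIMARY: alias to the ALB, gated by the health check above
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A",
                "SetIdentifier": "primary", "Failover": "PRIMARY",
                "HealthCheckId": hc["HealthCheck"]["Id"],
                "AliasTarget": {
                    "HostedZoneId": "ZALBEXAMPLE",  # the ALB's regional hosted zone ID
                    "DNSName": "app-alb-1234.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
        {   # SECONDARY: alias to the S3 static-website bucket holding the error page
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A",
                "SetIdentifier": "secondary", "Failover": "SECONDARY",
                "AliasTarget": {
                    "HostedZoneId": "ZS3WEBSITE",  # the S3 website endpoint's hosted zone ID for the Region
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        },
    ]},
)
```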
A solutions architect is designing a high performance computing (HPC) workload on Amazon EC2. The EC2 instances need to communicate with each other frequently and require low-latency, high-throughput network performance.
Which EC2 configuration meets these requirements?
- Launch the EC2 instances in a cluster placement group in one Availability Zone.
- Launch the EC2 instances in a spread placement group in one Availability Zone.
- Launch the EC2 instances in an Auto Scaling group in two Regions and peer the VPCs.
- Launch the EC2 instances in an Auto Scaling group spanning multiple Availability Zones.
- Launch the EC2 instances in a cluster placement group in one Availability Zone.
high performance computing (HPC) = cluster placement group
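A short boto3 sketch of the cluster placement group setup; the AMI ID, instance type, and node count are illustrative:

```python
import boto3

ec2 = boto3.client("ec2")

# The "cluster" strategy packs instances close together inside one AZ for
# low-latency, high-throughput node-to-node networking.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch the HPC nodes into the placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.18xlarge",       # network-optimized type (illustrative)
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)
```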
A company wants to host a scalable web application on AWS. The application will be accessed by users from different geographic regions of the world. Application users will be able to download and upload unique data up to gigabytes in size. The development team wants a cost-effective solution to minimize upload and download latency and maximize performance.
What should a solutions architect do to accomplish this?
- Use Amazon S3 with Transfer Acceleration to host the application.
- Use Amazon S3 with Cache-Control headers to host the application.
- Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
- Use Amazon EC2 with Auto Scaling and Amazon ElastiCache to host the application.
- Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
different regions = edge caching = CloudFront
cost-effective solution = CloudFront
minimize latency = CloudFront
maximize performance = CloudFront
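A hedged boto3 sketch of putting CloudFront in front of the Auto Scaling group's ALB; the ALB domain name and caller reference are placeholders, and a real distribution would likely add a separate cache behavior for static assets:

```python
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "webapp-dist-001",   # placeholder idempotency token
        "Comment": "Accelerate global uploads/downloads for the web app",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "app-alb",
                "DomainName": "webapp-alb-1234.us-east-1.elb.amazonaws.com",  # placeholder ALB DNS name
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "app-alb",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Uploads need the write methods allowed through CloudFront.
            "AllowedMethods": {
                "Quantity": 7,
                "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
                "CachedMethods": {"Quantity": 2, "Items": ["GET", "HEAD"]},
            },
            # Forward query strings and cookies for the dynamic app traffic.
            "ForwardedValues": {"QueryString": True, "Cookies": {"Forward": "all"}},
            "MinTTL": 0,
        },
    }
)
```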
A company is migrating from an on-premises infrastructure to the AWS Cloud. One of the company’s applications stores files on a Windows file server farm that uses Distributed File System Replication (DFSR) to keep data in sync. A solutions architect needs to replace the file server farm.
Which service should the solutions architect use?
- Amazon Elastic File System (Amazon EFS)
- Amazon FSx
- Amazon S3
- AWS Storage Gateway
- Amazon FSx
Windows file server = FSx
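A boto3 sketch of creating a Multi-AZ FSx for Windows File Server file system, assuming an existing AWS Managed Microsoft AD; all IDs and sizes are placeholders:

```python
import boto3

fsx = boto3.client("fsx")

# FSx for Windows File Server replaces the DFSR-based file server farm with a
# managed SMB file system.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                        # GiB (placeholder)
    StorageType="SSD",
    SubnetIds=["subnet-0abc1234", "subnet-0def5678"],   # placeholder subnets in two AZs
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",          # multi-AZ file system
        "PreferredSubnetId": "subnet-0abc1234",
        "ThroughputCapacity": 32,                # MB/s (placeholder)
        "ActiveDirectoryId": "d-1234567890",     # existing AWS Managed Microsoft AD (placeholder)
    },
)
```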
A company has a legacy application that processes data in two parts. The second part of the process takes longer than the first, so the company has decided to rewrite the application as two microservices running on Amazon ECS that can scale independently.
How should a solutions architect integrate the microservices?
- Implement code in microservice 1 to send data to an Amazon S3 bucket. Use S3 event notifications to invoke microservice 2.
- Implement code in microservice 1 to publish data to an Amazon SNS topic. Implement code in microservice 2 to subscribe to this topic.
- Implement code in microservice 1 to send data to Amazon Kinesis Data Firehose. Implement code in microservice 2 to read from Kinesis Data Firehose.
- Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue.
- Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue.
processes data in two parts = decouple = SQS queue
scale independently = decouple = SQS queue
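A minimal boto3 sketch of the SQS hand-off between the two microservices; the queue name and message fields are illustrative:

```python
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="part-two-work")["QueueUrl"]  # name is illustrative

# Microservice 1: hand off the output of part one as a message.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"job_id": "123", "s3_key": "input/123.dat"}),
)

# Microservice 2: poll, process, and delete, so each part scales on its own.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    payload = json.loads(msg["Body"])
    # ... run the longer second stage here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```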
A company captures clickstream data from multiple websites and analyzes it using batch processing. The data is loaded nightly into Amazon Redshift and is consumed by business analysts. The company wants to move towards near-real-time data processing for timely insights. The solution should process the streaming data with minimal effort and operational overhead.
Which combination of AWS services is MOST cost-effective for this solution? (Choose two.)
- Amazon EC2
- AWS Lambda
- Amazon Kinesis Data Streams
- Amazon Kinesis Data Firehose
- Amazon Kinesis Data Analytics
- AWS Lambda
- Amazon Kinesis Data Firehose
near-real-time data processing = Kinesis
lightweight processing with no servers to manage = Lambda
minimal effort and operational overhead = Amazon Kinesis Data Firehose is the easiest way to load streaming data into AWS.
Loads to Amazon Redshift = Amazon Kinesis Data Firehose
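A minimal producer-side sketch, assuming a Firehose delivery stream with Redshift as its destination (and optionally a Lambda transform attached) already exists; the stream name and event fields are illustrative:

```python
import json
import boto3

firehose = boto3.client("firehose")

# Websites push clickstream events; Firehose batches and delivers them to
# Redshift (via an intermediate S3 COPY) with no servers to manage.
event = {"page": "/pricing", "user_id": "u-42", "ts": "2024-01-01T00:00:00Z"}
firehose.put_record(
    DeliveryStreamName="clickstream-to-redshift",   # stream name is illustrative
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)
```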
A company’s application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. On the first day of every month at midnight, the application becomes much slower when the month-end financial calculation batch executes. This causes the CPU utilization of the EC2 instances to immediately peak to 100%, which disrupts the application.
What should a solutions architect recommend to ensure the application is able to handle the workload and avoid downtime?
- Configure an Amazon CloudFront distribution in front of the ALB.
- Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization.
- Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.
- Configure Amazon ElastiCache to remove some of the workload from the EC2 instances.
- Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.
On the first day of every month at midnight = predictable scaling = scheduled scaling policy
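A boto3 sketch of the scheduled scaling actions; the group name, sizes, and cron values are illustrative (in practice the scale-out would start a few minutes early to allow instance warm-up):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out for the month-end batch on the 1st at midnight UTC.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="app-asg",
    ScheduledActionName="month-end-scale-out",
    Recurrence="0 0 1 * *",     # cron, UTC
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# Scale back in two hours later, once the batch has finished.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="app-asg",
    ScheduledActionName="month-end-scale-in",
    Recurrence="0 2 1 * *",
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)
```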
An application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones and use an Amazon Aurora database. A solutions architect needs to make the application more resilient to periodic increases in request rates.
Which architecture should the solutions architect implement? (Choose two.)
- Add AWS Shield.
- Add Aurora Replica.
- Add AWS Direct Connect.
- Add AWS Global Accelerator.
- Add an Amazon CloudFront distribution in front of the Application Load Balancer.
- Add AWS Global Accelerator.
- Add an Amazon CloudFront distribution in front of the Application Load Balancer.
periodic increases in request rates = edge caching needed = CloudFront
periodic increases in request rates = route users over the AWS global edge network = Global Accelerator (no caching, but faster, more consistent paths)
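A minimal boto3 sketch of fronting the ALB with Global Accelerator; the ALB ARN and names are placeholders (the Global Accelerator API is served from us-west-2):

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accel = ga.create_accelerator(Name="app-accelerator", IpAddressType="IPV4", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=accel["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# Register the existing ALB as the endpoint (ARN is a placeholder).
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/app-alb/abc123",
        "Weight": 128,
    }],
)
```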
An application running on AWS uses an Amazon Aurora Multi-AZ deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database.
What should the solutions architect do to separate the read requests from the write requests?
- Enable read-through caching on the Amazon Aurora database.
- Update the application to read from the Multi-AZ standby instance.
- Create a read replica and modify the application to use the appropriate endpoint.
- Create a second Amazon Aurora database and link it to the primary database as a read replica.
- Create a read replica and modify the application to use the appropriate endpoint.
reads are causing high I/O and adding latency = read replica
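A boto3 sketch of adding a reader to the existing Aurora cluster; identifiers and instance class are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Aurora replicas share the cluster's storage volume, so adding one offloads
# reads without setting up separate replication.
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-reader-1",
    DBClusterIdentifier="app-aurora-cluster",   # existing cluster (placeholder name)
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)

# The application then sends writes to the cluster endpoint and reads to the
# reader endpoint, e.g. (placeholder hostnames):
#   writes -> app-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com
#   reads  -> app-aurora-cluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com
```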
A recently acquired company is required to build its own infrastructure on AWS and migrate multiple applications to the cloud within a month. Each application has approximately 50 TB of data to be transferred. After the migration is complete, this company and its parent company will both require secure network connectivity with consistent throughput from their data centers to the applications. A solutions architect must ensure one-time data migration and ongoing network connectivity.
Which solution will meet these requirements?
- AWS Direct Connect for both the initial transfer and ongoing connectivity.
- AWS Site-to-Site VPN for both the initial transfer and ongoing connectivity.
- AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity.
- AWS Snowball for the initial transfer and AWS Site-to-Site VPN for ongoing connectivity.
- AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity.
~50 TB per application, one-time migration = Snowball (devices hold 50-80 TB)
ongoing network connectivity = Direct Connect
A company serves content to its subscribers across the world using an application running on AWS. The application has several Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). Due to a recent change in copyright restrictions, the chief information officer (CIO) wants to block access for certain countries.
Which action will meet these requirements?
- Modify the ALB security group to deny incoming traffic from blocked countries.
- Modify the security group for EC2 instances to deny incoming traffic from blocked countries.
- Use Amazon CloudFront to serve the application and deny access to blocked countries.
- Use ALB listener rules to return access denied responses to incoming traffic from blocked countries.
- Use Amazon CloudFront to serve the application and deny access to blocked countries.
block access for certain countries = geo restrictions = CloudFront
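A boto3 sketch of adding a geo restriction to an existing distribution; the distribution ID and country codes are illustrative:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Fetch the current config (distribution ID is a placeholder), add a geo
# restriction blocklist, then push it back with the required ETag.
resp = cloudfront.get_distribution_config(Id="E1EXAMPLE12345")
config = resp["DistributionConfig"]

config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "blacklist",   # block the listed countries
        "Quantity": 2,
        "Items": ["KP", "SY"],            # ISO 3166-1 alpha-2 codes (illustrative)
    }
}

cloudfront.update_distribution(
    Id="E1EXAMPLE12345",
    DistributionConfig=config,
    IfMatch=resp["ETag"],
)
```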
A company is creating a new application that will store a large amount of data. The data will be analyzed hourly and modified by several Amazon EC2 Linux instances that are deployed across multiple Availability Zones. The application team believes the amount of space needed will continue to grow for the next 6 months.
Which set of actions should a solutions architect take to support these needs?
- Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the application instances.
- Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the application instances.
- Store the data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the application instances.
- Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared between the application instances.
- Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the application instances.
multiple Availability Zones = EFS
Linux = EFS
data modified = not static = not S3
single AZ = EBS
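A boto3 sketch of the EFS setup; subnet and security group IDs are placeholders, and the mount command assumes amazon-efs-utils is installed on the instances:

```python
import boto3

efs = boto3.client("efs")

# One EFS file system can be mounted by Linux instances in every AZ (one mount
# target per subnet) and grows automatically as data grows.
fs = efs.create_file_system(
    CreationToken="shared-analytics-data",   # idempotency token (placeholder)
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

for subnet_id in ["subnet-0abc1234", "subnet-0def5678"]:   # one subnet per AZ (placeholders)
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],           # must allow NFS (TCP 2049)
    )

# On each instance (shell, requires amazon-efs-utils):
#   sudo mount -t efs fs-12345678:/ /mnt/data
```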
A company is migrating a three-tier application to AWS. The application requires a MySQL database. In the past, the application users reported poor application performance when creating new entries. These performance issues were caused by users generating different real-time reports from the application during working hours.
Which solution will improve the performance of the application when it is moved to AWS?
- Import the data into an Amazon DynamoDB table with provisioned capacity. Refactor the application to use DynamoDB for reports.
- Create the database on a compute optimized Amazon EC2 instance. Ensure compute resources exceed the on-premises database.
- Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the application to use the reader endpoint for reports.
- Create an Amazon Aurora MySQL Multi-AZ DB cluster. Configure the application to use the backup instance of the cluster as an endpoint for the reports.
- Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the application to use the reader endpoint for reports.
Two options look nearly identical (both Aurora Multi-AZ); the correct answer is usually one of those.
real-time reports = read operations = read replicas needed
MySQL = relational workload = not DynamoDB; Aurora is MySQL-compatible
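An application-side sketch of splitting reads and writes between the Aurora endpoints, assuming the PyMySQL driver and placeholder hostnames, credentials, and schema:

```python
import pymysql  # assumes the PyMySQL driver; any MySQL client works the same way

# Cluster (writer) endpoint handles inserts; the reader endpoint load-balances
# across the read replicas for the real-time reports (placeholder values).
WRITER = "app-cluster.cluster-abc123.us-east-1.rds.amazonaws.com"
READER = "app-cluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def connect(host):
    return pymysql.connect(host=host, user="app", password="example", database="appdb")

# New entries go to the writer endpoint.
writer = connect(WRITER)
with writer.cursor() as cur:
    cur.execute("INSERT INTO entries (title) VALUES (%s)", ("monthly totals",))
writer.commit()
writer.close()

# Reports query the reader endpoint, keeping load off the writer.
reader = connect(READER)
with reader.cursor() as cur:
    cur.execute("SELECT title, created_at FROM entries ORDER BY created_at DESC LIMIT 100")
    rows = cur.fetchall()
reader.close()
```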
A solutions architect is deploying a distributed database on multiple Amazon EC2 instances. The database stores all data on multiple instances so it can withstand the loss of an instance. The database requires block storage with the latency and throughput to support several million transactions per second per server.
Which storage solution should the solutions architect use?
- Amazon Elastic Block Store (Amazon EBS)
- Amazon EC2 instance store
- Amazon Elastic File System (Amazon EFS)
- Amazon S3
- Amazon EC2 instance store
block storage at millions of transactions per second per server = instance store
lowest latency and highest throughput (local NVMe; the database replicates data itself) = instance store
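A boto3 sketch of launching storage-optimized nodes with local NVMe instance store; the AMI, instance type, and device names are illustrative:

```python
import boto3

ec2 = boto3.client("ec2")

# Storage-optimized types such as i3en ship with local NVMe instance store
# volumes: lowest latency, but ephemeral, which is acceptable here because the
# database replicates its data across instances.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="i3en.6xlarge",
    MinCount=3,
    MaxCount=3,
)

# On each instance the NVMe devices appear as /dev/nvme1n1, /dev/nvme2n1, ...
# and are formatted/mounted by the database bootstrap, e.g.:
#   sudo mkfs.xfs /dev/nvme1n1 && sudo mount /dev/nvme1n1 /data
```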
Organizers for a global event want to put daily reports online as static HTML pages. The pages are expected to generate millions of views from users around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective solution.
Which action should the solutions architect take to accomplish this?
- Generate presigned URLs for the files.
- Use cross-Region replication to all Regions.
- Use the geoproximity feature of Amazon Route 53.
- Use Amazon CloudFront with the S3 bucket as its origin.
- Use Amazon CloudFront with the S3 bucket as its origin.
users around the world = edge caching = CloudFront
efficient = CloudFront
S3 static pages = origin + CloudFront
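A boto3 sketch of the CloudFront distribution with the S3 bucket as its origin; the bucket name, caller reference, and managed cache policy ID are assumptions to verify:

```python
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "daily-reports-001",   # placeholder idempotency token
        "Comment": "Global edge caching for static event reports",
        "Enabled": True,
        "DefaultRootObject": "index.html",
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "reports-s3",
                "DomainName": "event-reports.s3.amazonaws.com",   # placeholder bucket
                "S3OriginConfig": {"OriginAccessIdentity": ""},   # use OAI/OAC for a private bucket
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "reports-s3",
            "ViewerProtocolPolicy": "redirect-to-https",
            # AWS managed "CachingOptimized" policy ID; verify in your account.
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
```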