saa-c02-part-10 Flashcards
A company is hosting multiple websites for several lines of business under its registered parent domain. Users accessing these websites will be routed to appropriate backend Amazon EC2 instances based on the subdomain. The websites host static webpages, images, and server-side scripts like PHP and JavaScript.
Some of the websites experience peak access during the first two hours of business with constant usage throughout the rest of the day. A solutions architect needs to design a solution that will automatically adjust capacity to these traffic patterns while keeping costs low.
Which combination of AWS services or features will meet these requirements? (Choose two.)
- AWS Batch
- Network Load Balancer
- Application Load Balancer
- Amazon EC2 Auto Scaling
- Amazon S3 website hosting
- Amazon EC2 Auto Scaling
- Amazon S3 website hosting
static webpages = S3 website hosting (option 5)
automatically adjust capacity = EC2 Auto Scaling (option 4)
A company uses an Amazon S3 bucket to store static images for its website. The company configured permissions to allow access to Amazon S3 objects by privileged users only.
What should a solutions architect do to protect against data loss? (Choose two.)
- Enable versioning on the S3 bucket.
- Enable access logging on the S3 bucket.
- Enable server-side encryption on the S3 bucket.
- Configure an S3 lifecycle rule to transition objects to Amazon S3 Glacier.
- Use MFA Delete to require multi-factor authentication to delete an object.
- Enable versioning on the S3 bucket.
- Use MFA Delete to require multi-factor authentication to delete an object.
protect against data loss S3 = versioning
data loss = Use MFA Delete
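The two marked answers combine into a single S3 versioning configuration; a minimal boto3-style sketch, where the bucket name and MFA device serial are hypothetical:

```python
# Versioning keeps every object version; MFA Delete additionally requires a
# multi-factor token to delete a version or suspend versioning. These are the
# parameter names boto3's put_bucket_versioning call expects.
versioning_config = {
    "Status": "Enabled",     # retain all object versions
    "MFADelete": "Enabled",  # MFA required to delete versions
}

# Applied roughly as (MFA Delete can only be set by the root account):
# s3 = boto3.client("s3")
# s3.put_bucket_versioning(
#     Bucket="static-images-bucket",                          # hypothetical
#     VersioningConfiguration=versioning_config,
#     MFA="arn:aws:iam::123456789012:mfa/root-device 123456", # serial + code
# )
```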
An operations team has a standard that states IAM policies should not be applied directly to users. Some new team members have not been following this standard. The operations manager needs a way to easily identify the users with attached policies.
What should a solutions architect do to accomplish this?
- Monitor using AWS CloudTrail.
- Create an AWS Config rule to run daily.
- Publish IAM user changes to Amazon SNS.
- Run AWS Lambda when a user is modified.
- Create an AWS Config rule to run daily.
identify users with attached policies = AWS Config rule
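AWS Config ships a managed rule for exactly this check; a sketch of the rule parameters as Config would receive them (the rule name is our own choice, the source identifier is the real managed rule):

```python
# IAM_USER_NO_POLICIES_CHECK flags any IAM user with inline or directly
# attached policies -- the users who are violating the team standard.
config_rule = {
    "ConfigRuleName": "iam-users-without-direct-policies",  # hypothetical name
    "Scope": {"ComplianceResourceTypes": ["AWS::IAM::User"]},
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "IAM_USER_NO_POLICIES_CHECK",
    },
}
```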
A company wants to use an AWS Region as a disaster recovery location for its on-premises infrastructure. The company has 10 TB of existing data, and the on-premises data center has a 1 Gbps internet connection. A solutions architect must find a solution so the company can have its existing data on AWS within 72 hours without transmitting it over an unencrypted channel.
Which solution should the solutions architect select?
- Send the initial 10 TB of data to AWS using FTP.
- Send the initial 10 TB of data to AWS using AWS Snowball.
- Establish a VPN connection between Amazon VPC and the company’s data center.
- Establish an AWS Direct Connect connection between Amazon VPC and the company’s data center.
- Establish a VPN connection between Amazon VPC and the company’s data center.
1 Gbps for 10 TB = 22 hours < 72 hours = just need encryption = VPN
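The 22-hour figure in the note checks out as an ideal-case calculation (full link utilization, no protocol overhead):

```python
# 10 TB pushed through a fully utilized 1 Gbps link.
data_bits = 10 * 10**12 * 8           # 10 TB in bits (decimal TB)
link_bps = 1 * 10**9                  # 1 Gbps
hours = data_bits / link_bps / 3600
print(round(hours, 1))                # 22.2 -- well under the 72-hour window
```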
A company is building applications in containers. The company wants to migrate its development and operations services from its on-premises data center to AWS. Management states that the production systems must be cloud agnostic and use the same configuration and administrator tools across all production systems. A solutions architect needs to design a managed solution that aligns with open-source software.
Which solution meets these requirements?
- Launch the containers on Amazon EC2 with EC2 instance worker nodes.
- Launch the containers on Amazon Elastic Kubernetes Service (Amazon EKS) with EKS worker nodes.
- Launch the containers on Amazon Elastic Container Service (Amazon ECS) with AWS Fargate instances.
- Launch the containers on Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 instance worker nodes.
- Launch the containers on Amazon Elastic Kubernetes Service (Amazon EKS) with EKS worker nodes.
cloud agnostic + open source = Kubernetes = EKS
A company hosts its website on AWS. To address the highly variable demand, the company has implemented Amazon EC2 Auto Scaling. Management is concerned that the company is over-provisioning its infrastructure, especially at the front end of the three-tier application. A solutions architect needs to ensure costs are optimized without impacting performance.
What should the solutions architect do to accomplish this?
- Use Auto Scaling with Reserved Instances.
- Use Auto Scaling with a scheduled scaling policy.
- Use Auto Scaling with the suspend-resume feature.
- Use Auto Scaling with a target tracking scaling policy.
- Use Auto Scaling with a target tracking scaling policy.
costs are optimized = target tracking
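A target tracking policy needs only a metric and a target; Auto Scaling then adds and removes capacity to hold the metric at that target, which trims over-provisioning automatically. A sketch of the `put_scaling_policy` parameters (the group name, policy name, and 50% target are hypothetical):

```python
# Keep average CPU across the front-end group near 50%.
policy = {
    "AutoScalingGroupName": "front-end-asg",   # hypothetical
    "PolicyName": "front-end-cpu-50",          # hypothetical
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
}
```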
A solutions architect is performing a security review of a recently migrated workload. The workload is a web application that consists of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The solutions architect must improve the security posture and minimize the impact of a DDoS attack on resources.
Which solution is MOST effective?
- Configure an AWS WAF ACL with rate-based rules. Create an Amazon CloudFront distribution that points to the Application Load Balancer. Enable the WAF ACL on the CloudFront distribution.
- Create a custom AWS Lambda function that adds identified attacks into a common vulnerability pool to capture a potential DDoS attack. Use the identified information to modify a network ACL to block access.
- Enable VPC Flow Logs and store them in Amazon S3. Create a custom AWS Lambda function that parses the logs looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.
- Enable Amazon GuardDuty and configure findings written to Amazon CloudWatch. Create an event with CloudWatch Events for DDoS alerts that triggers Amazon Simple Notification Service (Amazon SNS). Have Amazon SNS invoke a custom AWS Lambda function that parses the logs, looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.
- Configure an AWS WAF ACL with rate-based rules. Create an Amazon CloudFront distribution that points to the Application Load Balancer. Enable the WAF ACL on the CloudFront distribution.
DDoS = CloudFront
MOST effective = don't over-engineer
AWS WAF is a web application firewall that helps detect and mitigate web application layer DDoS attacks by inspecting traffic inline
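A rate-based rule is what makes the WAF answer effective against volumetric HTTP floods; a WAFv2-style sketch of one rule (the rule name, priority, and 2,000-request limit are hypothetical choices):

```python
# Block any source IP that exceeds the limit within a 5-minute window.
rate_rule = {
    "Name": "rate-limit-per-ip",      # hypothetical
    "Priority": 0,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,            # requests per 5 minutes per IP
            "AggregateKeyType": "IP",
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "rateLimitPerIp",
    },
}
```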
A company has multiple AWS accounts for various departments. One of the departments wants to share an Amazon S3 bucket with all other departments.
Which solution will require the LEAST amount of effort?
- Enable cross-account S3 replication for the bucket.
- Create a pre-signed URL for the bucket and share it with other departments.
- Set the S3 bucket policy to allow cross-account access to other departments.
- Create IAM users for each of the departments and configure a read-only IAM policy.
- Set the S3 bucket policy to allow cross-account access to other departments.
LEAST amount of effort = cross-account access
pre-signed URL = temporary
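Cross-account access is a single bucket policy statement; a sketch with a placeholder account ID and bucket name:

```python
# Grant another account read access to the bucket and its objects.
# s3:ListBucket applies to the bucket ARN, s3:GetObject to the object ARN.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "CrossAccountRead",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::222222222222:root"},  # other dept
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::shared-dept-bucket",    # hypothetical bucket
            "arn:aws:s3:::shared-dept-bucket/*",
        ],
    }],
}
```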
A company needs to share an Amazon S3 bucket with an external vendor. The bucket owner must be able to access all objects.
Which action should be taken to share the S3 bucket?
- Update the bucket to be a Requester Pays bucket.
- Update the bucket to enable cross-origin resource sharing (CORS).
- Create a bucket policy to require users to grant bucket-owner-full-control when uploading objects.
- Create an IAM policy to require users to grant bucket-owner-full-control when uploading objects.
- Create a bucket policy to require users to grant bucket-owner-full-control when uploading objects.
bucket needs permissions = bucket policy
An edge-case scenario (requiring bucket-owner-full-control on uploads): https://aws.amazon.com/it/premiumsupport/knowledge-center/s3-require-object-ownership/
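The documented pattern is a Deny statement on any upload that omits the bucket-owner-full-control ACL; a sketch (the bucket name is a placeholder):

```python
# Reject any PutObject that does not hand full control to the bucket owner,
# so the owner can always access objects the external vendor uploads.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireOwnerFullControl",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::vendor-shared-bucket/*",  # hypothetical
        "Condition": {
            "StringNotEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
        },
    }],
}
```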
A company is developing a real-time multiplayer game that uses UDP for communications between clients and servers in an Auto Scaling group. Spikes in demand are anticipated during the day, so the game server platform must adapt accordingly. Developers want to store gamer scores and other non-relational data in a database solution that will scale without intervention.
Which solution should a solutions architect recommend?
- Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage.
- Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data storage.
- Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for data storage.
- Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global tables for data storage.
- Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data storage.
UDP = Layer 4 (transport) = NLB = options 2, 3
non-relational data = DynamoDB = option 2
A company collects temperature, humidity, and atmospheric pressure data in cities across multiple continents. The average volume of data collected per site each day is 500 GB. Each site has a high-speed internet connection. The company’s weather forecasting applications are based in a single Region and analyze the data daily.
What is the FASTEST way to aggregate data from all of these global sites?
- Enable Amazon S3 Transfer Acceleration on the destination bucket. Use multipart uploads to directly upload site data to the destination bucket.
- Upload site data to an Amazon S3 bucket in the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket.
- Schedule AWS Snowball jobs daily to transfer data to the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket.
- Upload the data to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Once a day take an EBS snapshot and copy it to the centralized Region. Restore the EBS volume in the centralized Region and run an analysis on the data daily.
- Enable Amazon S3 Transfer Acceleration on the destination bucket. Use multipart uploads to directly upload site data to the destination bucket.
FASTEST = Transfer Acceleration + multipart uploads
A company has a custom application running on an Amazon EC2 instance that:
• Reads a large amount of data from Amazon S3
• Performs a multi-stage analysis
• Writes the results to Amazon DynamoDB
The application writes a significant number of large, temporary files during the multi-stage analysis. The process performance depends on the temporary storage performance.
What would be the fastest storage option for holding the temporary files?
- Multiple Amazon S3 buckets with Transfer Acceleration for storage.
- Multiple Amazon Elastic Block Store (Amazon EBS) drives with Provisioned IOPS and EBS optimization.
- Multiple Amazon Elastic File System (Amazon EFS) volumes using the Network File System version 4.1 (NFSv4.1) protocol.
- Multiple instance store volumes with software RAID 0.
- Multiple instance store volumes with software RAID 0.
fastest storage option = instance store volumes
A leasing company generates and emails PDF statements every month for all its customers. Each statement is about 400 KB in size. Customers can download their statements from the website for up to 30 days from when the statements were generated. At the end of their 3-year lease, the customers are emailed a ZIP file that contains all the statements.
What is the MOST cost-effective storage solution for this situation?
- Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 1 day.
- Store the statements using the Amazon S3 Glacier storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier Deep Archive storage after 30 days.
- Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) storage after 30 days.
- Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days.
- Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days.
MOST cost-effective storage = S3 Standard-IA + Amazon S3 Glacier
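The winning combination maps to one lifecycle rule: statements land in S3 Standard-IA at upload time, then transition to Glacier once the 30-day download window closes. A sketch of the lifecycle configuration (the rule ID and prefix are hypothetical):

```python
# Move statements to Glacier after the 30-day website-download window.
lifecycle_config = {
    "Rules": [{
        "ID": "statements-to-glacier",        # hypothetical
        "Status": "Enabled",
        "Filter": {"Prefix": "statements/"},  # hypothetical prefix
        "Transitions": [
            {"Days": 30, "StorageClass": "GLACIER"},
        ],
    }],
}
```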
A company recently released a new type of internet-connected sensor. The company is expecting to sell thousands of sensors, which are designed to stream high volumes of data each second to a central location. A solutions architect must design a solution that ingests and stores data so that engineering teams can analyze it in near-real time with millisecond responsiveness.
Which solution should the solutions architect recommend?
- Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift.
- Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB.
- Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift.
- Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB.
- Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift.
stream high volumes of data = Kinesis = options 3, 4
internet-connected sensors = IoT = millisecond responsiveness = Redshift
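The Lambda side of the marked answer receives Kinesis records base64-encoded; a minimal consumer sketch using the record field names a Kinesis trigger actually delivers (the sensor payload shape is hypothetical):

```python
import base64
import json

def handler(event, context=None):
    """Decode each Kinesis record into a sensor reading."""
    readings = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        readings.append(json.loads(payload))
    return readings  # the real function would load these into the data store

# Local check with a fake Kinesis-shaped event:
encoded = base64.b64encode(b'{"sensor": 1, "temp": 21.5}').decode()
event = {"Records": [{"kinesis": {"data": encoded}}]}
print(handler(event))  # [{'sensor': 1, 'temp': 21.5}]
```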
A website runs a web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the custom application consistently takes 1 minute to initiate upon boot up before responding to user requests.
How should a solutions architect redesign the architecture to better respond to changing traffic?
- Configure a Network Load Balancer with a slow start configuration.
- Configure AWS ElastiCache for Redis to offload direct requests to the servers.
- Configure an Auto Scaling step scaling policy with an instance warmup condition.
- Configure Amazon CloudFront to use an Application Load Balancer as the origin.
- Configure Amazon CloudFront to use an Application Load Balancer as the origin.
Auto Scaling groups = ALB in front + CloudFront for caching
pictures = easily cached = CloudFront