saa-c02-part-04 Flashcards
A company has a multi-tier application that runs six front-end web servers in an Amazon EC2 Auto Scaling group in a single Availability Zone behind an Application Load Balancer (ALB). A solutions architect needs to modify the infrastructure to be highly available without modifying the application.
Which architecture should the solutions architect choose that provides high availability?
- Create an Auto Scaling group that uses three instances across each of two Regions.
- Modify the Auto Scaling group to use three instances across each of two Availability Zones.
- Create an Auto Scaling template that can be used to quickly create more instances in another Region.
- Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance traffic to the web tier.
- Modify the Auto Scaling group to use three instances across each of two Availability Zones.
highly available = at least 2 AZs
6 instances / 2 AZs = three per AZ
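A minimal sketch of the change with boto3, assuming hypothetical group name and subnet IDs (one subnet per AZ):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical names/IDs for illustration only.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    # One subnet per Availability Zone; the ASG rebalances the six
    # instances across the two AZs (three per AZ).
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    MinSize=6,
    MaxSize=6,
    DesiredCapacity=6,
)
```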
A company runs an application on a group of Amazon Linux EC2 instances. For compliance reasons, the company must retain all application log files for 7 years. The log files will be analyzed by a reporting tool that must access all files concurrently.
Which storage solution meets these requirements MOST cost-effectively?
- Amazon Elastic Block Store (Amazon EBS)
- Amazon Elastic File System (Amazon EFS)
- Amazon EC2 instance store
- Amazon S3
- Amazon S3
concurrent access to all files + 7-year retention at the lowest cost = S3 (EBS and instance store attach to a single instance; EFS works but costs more than S3)
A media streaming company collects real-time data and stores it in a disk-optimized database system. The company is not getting the expected throughput and wants an in-memory database storage solution that performs faster and provides high availability using data replication.
Which database should a solutions architect recommend?
- Amazon RDS for MySQL
- Amazon RDS for PostgreSQL
- Amazon ElastiCache for Redis
- Amazon ElastiCache for Memcached
in-memory DB = ElastiCache for Redis (non-relational)
HA = Redis
replication = Redis
Memcached = no replication = not HA
https://aws.amazon.com/elasticache/redis-vs-memcached/
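A minimal sketch of a replicated, Multi-AZ Redis setup with boto3, assuming hypothetical IDs and node sizes:

```python
import boto3

elasticache = boto3.client("elasticache")

# Hypothetical identifiers and node type for illustration only.
elasticache.create_replication_group(
    ReplicationGroupId="realtime-cache",
    ReplicationGroupDescription="In-memory store with replication",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumCacheClusters=2,              # one primary + one replica
    AutomaticFailoverEnabled=True,   # promote the replica if the primary fails
    MultiAZEnabled=True,
)
```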
A company hosts its product information webpages on AWS. The existing solution uses multiple Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. The website also uses a custom DNS name and communicates with HTTPS only using a dedicated SSL certificate. The company is planning a new product launch and wants to be sure that users from around the world have the best possible experience on the new website.
What should a solutions architect do to meet these requirements?
- Redesign the application to use Amazon CloudFront.
- Redesign the application to use AWS Elastic Beanstalk.
- Redesign the application to use a Network Load Balancer.
- Redesign the application to use Amazon S3 static website hosting.
- Redesign the application to use Amazon CloudFront.
users from around the world = edge caching = CloudFront
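A hedged sketch of putting CloudFront in front of the existing ALB, with hypothetical ALB domain, alias, and ACM certificate ARN (the certificate for CloudFront must live in us-east-1):

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Hypothetical ALB domain name, alias, and certificate ARN for illustration only.
cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "Edge caching in front of the existing ALB",
        "Enabled": True,
        "Aliases": {"Quantity": 1, "Items": ["www.example.com"]},
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "alb-origin",
                    "DomainName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "alb-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {"QueryString": True, "Cookies": {"Forward": "all"}},
            "MinTTL": 0,
        },
        "ViewerCertificate": {
            "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example",
            "SSLSupportMethod": "sni-only",
            "MinimumProtocolVersion": "TLSv1.2_2021",
        },
    }
)
```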
A solutions architect is designing the cloud architecture for a new application being deployed on AWS. The process should run in parallel while adding and removing application nodes as needed based on the number of jobs to be processed. The processor application is stateless. The solutions architect must ensure that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?
- Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage.
- Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage.
- Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.
- Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.
- Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.
loosely coupled = SQS queue needed
launch templates are better than launch configurations because they support versioning
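A minimal sketch of scaling on queue depth, assuming hypothetical group and queue names (AWS also documents a more refined backlog-per-instance metric, but this is the simplest form):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical group and queue names for illustration only.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="job-processor-asg",
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],
            "Statistic": "Average",
        },
        # Add nodes when the backlog exceeds ~10 visible messages on average.
        "TargetValue": 10.0,
    },
)
```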
A marketing company is storing CSV files in an Amazon S3 bucket for statistical analysis. An application on an Amazon EC2 instance needs permission to efficiently process the CSV data stored in the S3 bucket.
Which action will MOST securely grant the EC2 instance access to the S3 bucket?
- Attach a resource-based policy to the S3 bucket.
- Create an IAM user for the application with specific permissions to the S3 bucket.
- Associate an IAM role with least privilege permissions to the EC2 instance profile.
- Store AWS credentials directly on the EC2 instance for applications on the instance to use for API calls.
- Associate an IAM role with least privilege permissions to the EC2 instance profile.
instance needs permission = IAM role (always look for “role” answers)
least privilege
https://aws.amazon.com/ko/premiumsupport/knowledge-center/ec2-instance-access-s3-bucket/
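A sketch of the role + instance profile setup with boto3, assuming hypothetical role, policy, and bucket names:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the EC2 service may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
iam.create_role(RoleName="csv-reader", AssumeRolePolicyDocument=json.dumps(trust_policy))

# Least privilege: read-only access to the one bucket the application needs.
iam.put_role_policy(
    RoleName="csv-reader",
    PolicyName="read-csv-bucket",
    PolicyDocument=json.dumps(
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": ["s3:GetObject", "s3:ListBucket"],
                    "Resource": [
                        "arn:aws:s3:::example-csv-bucket",
                        "arn:aws:s3:::example-csv-bucket/*",
                    ],
                }
            ],
        }
    ),
)

# The instance profile is what actually gets attached to the EC2 instance.
iam.create_instance_profile(InstanceProfileName="csv-reader-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="csv-reader-profile", RoleName="csv-reader"
)
```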
A company has on-premises servers running a relational database. The current database serves high read traffic for users in different locations. The company wants to migrate to AWS with the least amount of effort. The database solution should support disaster recovery and not affect the company’s current traffic flow.
Which solution meets these requirements?
- Use a database in Amazon RDS with Multi-AZ and at least one read replica.
- Use a database in Amazon RDS with Multi-AZ and at least one standby replica.
- Use databases hosted on multiple Amazon EC2 instances in different AWS Regions.
- Use databases hosted on Amazon EC2 instances behind an Application Load Balancer in different Availability Zones.
- Use a database in Amazon RDS with Multi-AZ and at least one read replica.
relational database = RDS
high read traffic = read replicas
disaster recovery = Multi-AZ
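A minimal sketch of the Multi-AZ primary plus a read replica with boto3, assuming hypothetical identifiers, sizes, and credentials:

```python
import boto3

rds = boto3.client("rds")

# Hypothetical identifiers, size, and credentials for illustration only.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me",
    MultiAZ=True,   # synchronous standby in another AZ for disaster recovery
)

# Asynchronous read replica to absorb the heavy read traffic.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",
    SourceDBInstanceIdentifier="app-db",
)
```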
A company’s application is running on Amazon EC2 instances within an Auto Scaling group behind an Elastic Load Balancer. Based on the application’s history, the company anticipates a spike in traffic during a holiday each year. A solutions architect must design a strategy to ensure that the Auto Scaling group proactively increases capacity to minimize any performance impact on application users.
Which solution will meet these requirements?
- Create an Amazon CloudWatch alarm to scale up the EC2 instances when CPU utilization exceeds 90%.
- Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand.
- Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during the peak demand period.
- Configure an Amazon Simple Notification Service (Amazon SNS) notification to send alerts when there are autoscaling EC2_INSTANCE_LAUNCH events.
- Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand.
during a holiday each year = predictable = scheduled
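A minimal sketch of a recurring scheduled action with boto3, assuming a hypothetical group name and date (Recurrence is a UTC cron expression; a second action would scale back down after the holiday):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical group name, date, and sizes for illustration only.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="holiday-scale-up",
    Recurrence="0 8 20 12 *",   # 08:00 UTC every December 20th
    MinSize=10,
    MaxSize=30,
    DesiredCapacity=10,
)
```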
A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate messages.
What should a solutions architect do to ensure that messages are processed only once?
- Use the CreateQueue API call to create a new queue.
- Use the AddPermission API call to add appropriate permissions.
- Use the ReceiveMessage API call to set an appropriate wait time.
- Use the ChangeMessageVisibility API call to increase the visibility timeout.
- Use the ChangeMessageVisibility API call to increase the visibility timeout.
duplicate records = multiple consumers see the same message = visibility timeout is not long enough for processing
The problem here is that multiple EC2 instances pick up the SAME message, process it, and write the results into RDS. This happens when the visibility timeout is shorter than the processing time, so the timeout expires BEFORE an instance can finish processing and DELETE the message from the queue.
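A minimal sketch of the fix, assuming a hypothetical queue URL and a worst-case processing time under five minutes:

```python
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL; set the timeout above the worst-case processing time.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/jobs-queue",
    Attributes={"VisibilityTimeout": "300"},  # seconds
)
```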
An Amazon EC2 administrator created the following policy associated with an IAM group containing several users:
What is the effect of this policy?
- Users can terminate an EC2 instance in any AWS Region except us-east-1.
- Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region.
- Users can terminate an EC2 instance in the us-east-1 Region when the user’s source IP is 10.100.100.254.
- Users cannot terminate an EC2 instance in the us-east-1 Region when the user’s source IP is 10.100.100.254.
- Users can terminate an EC2 instance in the us-east-1 Region when the user’s source IP is 10.100.100.254.
1 is wrong because of the Deny statement
2 is wrong because the Allow condition is on the caller’s source IP, not the instance’s IP address
4 is wrong because the policy allows (does not deny) termination, so “cannot terminate” is backwards
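The original card shows the policy as an image, so it is not reproduced above. A policy of roughly this shape (a reconstruction for illustration, not the exam’s exact document) would produce the described effect:

```python
# Reconstructed for illustration only; the exam question shows the actual JSON.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow terminating instances when the API call comes from this network.
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {"IpAddress": {"aws:SourceIp": "10.100.100.0/24"}},
        },
        {
            # Deny EC2 actions in every Region except us-east-1.
            "Effect": "Deny",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {"StringNotEquals": {"ec2:Region": "us-east-1"}},
        },
    ],
}
```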
A solutions architect is optimizing a website for an upcoming musical event. Videos of the performances will be streamed in real time and then will be available on demand. The event is expected to attract a global online audience.
Which service will improve the performance of both the real-time and on-demand streaming?
- Amazon CloudFront
- AWS Global Accelerator
- Amazon Route 53
- Amazon S3 Transfer Acceleration
- Amazon CloudFront
global online audience = performance + edge caching = CloudFront (Global Accelerator improves the network path but does not cache content)
A company has a three-tier image-sharing application. It uses an Amazon EC2 instance for the front-end layer, another for the backend tier, and a third for the MySQL database. A solutions architect has been tasked with designing a solution that is highly available and requires the fewest changes to the application.
Which solution meets these requirements?
- Use Amazon S3 to host the front-end layer and AWS Lambda functions for the backend layer. Move the database to an Amazon DynamoDB table and use Amazon S3 to store and serve users’ images.
- Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with multiple read replicas to store and serve users’ images.
- Use Amazon S3 to host the front-end layer and a fleet of Amazon EC2 instances in an Auto Scaling group for the backend layer. Move the database to a memory optimized instance type to store and serve users’ images.
- Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with a Multi-AZ deployment. Use Amazon S3 to store and serve users’ images.
- Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with a Multi-AZ deployment. Use Amazon S3 to store and serve users’ images.
MySQL = RDS, so option 1 (DynamoDB) is invalid
option 3 (move the database to a memory-optimized instance and serve images from it) = a lot of changes = invalid
only 2 and 4 are left; highly available = an RDS instance with a Multi-AZ deployment, so 4 wins
A solutions architect is designing a system to analyze the performance of financial markets while the markets are closed. The system will run a series of compute-intensive jobs for 4 hours every night. The time to complete the compute jobs is expected to remain constant, and jobs cannot be interrupted once started. Once completed, the system is expected to run for a minimum of 1 year.
Which type of Amazon EC2 instances should be used to reduce the cost of the system?
- Spot Instances
- On-Demand Instances
- Standard Reserved Instances
- Scheduled Reserved Instances
- Scheduled Reserved Instances
4 hours every night = scheduled
Scheduled Reserved Instances (Scheduled Instances) enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-scheduled-instances.html
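A hedged boto3 sketch of finding and purchasing a matching daily 4-hour schedule, with hypothetical dates:

```python
import datetime
import boto3

ec2 = boto3.client("ec2")

# Find capacity that recurs daily for a 4-hour window (hypothetical dates).
offers = ec2.describe_scheduled_instance_availability(
    FirstSlotStartTimeRange={
        "EarliestTime": datetime.datetime(2021, 1, 1, 1, 0),
        "LatestTime": datetime.datetime(2021, 1, 8, 1, 0),
    },
    Recurrence={"Frequency": "Daily", "Interval": 1},
    MinSlotDurationInHours=4,
)

# Purchase the first matching schedule for the one-year term.
token = offers["ScheduledInstanceAvailabilitySet"][0]["PurchaseToken"]
ec2.purchase_scheduled_instances(
    PurchaseRequests=[{"PurchaseToken": token, "InstanceCount": 1}]
)
```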
A company built a food ordering application that captures user data and stores it for future analysis. The application’s static front end is deployed on an Amazon EC2 instance. The front-end application sends the requests to the backend application running on a separate EC2 instance. The backend application then stores the data in Amazon RDS.
What should a solutions architect do to decouple the architecture and make it scalable?
- Use Amazon S3 to serve the front-end application, which sends requests to Amazon EC2 to execute the backend application. The backend application will process and store the data in Amazon RDS.
- Use Amazon S3 to serve the front-end application and write requests to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon EC2 instances to the HTTP/HTTPS endpoint of the topic, and process and store the data in Amazon RDS.
- Use an EC2 instance to serve the front end and write requests to an Amazon SQS queue. Place the backend instance in an Auto Scaling group, and scale based on the queue depth to process and store the data in Amazon RDS.
- Use Amazon S3 to serve the static front-end application and send requests to Amazon API Gateway, which writes the requests to an Amazon SQS queue. Place the backend instances in an Auto Scaling group, and scale based on the queue depth to process and store the data in Amazon RDS.
- Use Amazon S3 to serve the static front-end application and send requests to Amazon API Gateway, which writes the requests to an Amazon SQS queue. Place the backend instances in an Auto Scaling group, and scale based on the queue depth to process and store the data in Amazon RDS.
static front end = S3 = 1,2,4
decouple = SQS = 2,4
scalable = ASG = 4
A solutions architect needs to design a managed storage solution for a company’s application that includes high-performance machine learning. This application runs on AWS Fargate, and the connected storage needs to have concurrent access to files and deliver high performance.
Which storage option should the solutions architect recommend?
- Create an Amazon S3 bucket for the application and establish an IAM role for Fargate to communicate with Amazon S3.
- Create an Amazon FSx for Lustre file share and establish an IAM role that allows Fargate to communicate with FSx for Lustre.
- Create an Amazon Elastic File System (Amazon EFS) file share and establish an IAM role that allows Fargate to communicate with Amazon Elastic File System (Amazon EFS).
- Create an Amazon Elastic Block Store (Amazon EBS) volume for the application and establish an IAM role that allows Fargate to communicate with Amazon Elastic Block Store (Amazon EBS).
- Create an Amazon Elastic File System (Amazon EFS) file share and establish an IAM role that allows Fargate to communicate with Amazon Elastic File System (Amazon EFS).
AWS Fargate = doesn’t work with FSx for Lustre (https://docs.aws.amazon.com/fsx/latest/LustreGuide/mounting-ecs.html)
machine learning + high-performance + concurrent access = EFS
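A minimal sketch of mounting EFS into a Fargate task definition, assuming a hypothetical file system ID, image, and names (EFS on Fargate requires platform version 1.4.0 or later):

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical file system ID, image, and names for illustration only.
ecs.register_task_definition(
    family="ml-worker",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="4096",
    volumes=[
        {
            "name": "shared-data",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-12345678",
                "transitEncryption": "ENABLED",
            },
        }
    ],
    containerDefinitions=[
        {
            "name": "worker",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/ml-worker:latest",
            "essential": True,
            # Every task mounts the same EFS file system, so all containers
            # see the same files concurrently.
            "mountPoints": [{"sourceVolume": "shared-data", "containerPath": "/data"}],
        }
    ],
)
```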