Q001-050 Flashcards

1
Q

A solutions architect is designing a solution where users will be directed to a backup static error page if the primary website is unavailable. The primary website’s DNS records are hosted in Amazon Route 53, and the domain points to an Application Load Balancer (ALB). Which configuration should the solutions architect use to meet the company’s needs while minimizing changes and infrastructure overhead?

A. Point a Route 53 alias record to an Amazon CloudFront distribution with the ALB as one of its origins. Then, create custom error pages for the distribution.

B. Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page hosted within an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.

C. Update the Route 53 record to use a latency-based routing policy. Add the backup static error page hosted within an Amazon S3 bucket to the record so the traffic is sent to the most responsive endpoints.

D. Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance hosting a static error page as endpoints. Route 53 will only send requests to the instance if the health checks fail for the ALB.

A

Correct answer: B.

A. CloudFront Distribution with Custom Error Pages: While this is a viable solution for high availability and performance enhancement through CDN, it adds more complexity and isn’t necessary just for redirecting users to a static error page in case of primary site failure.

C. Latency-Based Routing Policy: This policy is used to route traffic based on the lowest network latency for your end user. It’s not suitable for a failover scenario where the primary concern is the availability of the primary website.

D. Active-Active Configuration with EC2 Instance: An active-active configuration with an EC2 instance as a backup for a static error page is overkill in terms of cost and management. Using an EC2 instance for a static error page is not cost-effective compared to using S3.
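To make the chosen failover concrete, here is a minimal boto3 sketch of the record pair. The hosted zone ID, ALB DNS name, and domain are hypothetical; the two alias zone IDs are the published us-east-1 constants for ALBs and S3 website endpoints.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0HYPOTHETICAL"    # hypothetical: the domain's hosted zone
ALB_DNS = "primary-alb-123.us-east-1.elb.amazonaws.com"  # hypothetical ALB
ALB_ZONE_ID = "Z35SXDOTRQ7X7K"       # alias zone ID for ALBs in us-east-1
S3_SITE_ZONE_ID = "Z3AQBSTGFYJSTF"   # alias zone ID for S3 websites in us-east-1

# Health check Route 53 uses to decide when to fail over to the error page.
check_id = route53.create_health_check(
    CallerReference="primary-alb-check-1",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": ALB_DNS,
        "Port": 443,
        "ResourcePath": "/health",
    },
)["HealthCheck"]["Id"]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": [
        {   # PRIMARY: the ALB, used while the health check passes.
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "HealthCheckId": check_id,
                "AliasTarget": {
                    "HostedZoneId": ALB_ZONE_ID,
                    "DNSName": ALB_DNS,
                    "EvaluateTargetHealth": True,
                },
            },
        },
        {   # SECONDARY: S3 static-website endpoint hosting the error page
            # (bucket must be named www.example.com with website hosting on).
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",
                "AliasTarget": {
                    "HostedZoneId": S3_SITE_ZONE_ID,
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        },
    ]},
)
```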

2
Q

A solutions architect is designing a high performance computing (HPC) workload on Amazon EC2. The EC2 instances need to communicate to each other frequently and require network performance with low latency and high throughput.
Which EC2 configuration meets these requirements?

A. Launch the EC2 instances in a cluster placement group in one Availability Zone.

B. Launch the EC2 instances in a spread placement group in one Availability Zone.

C. Launch the EC2 instances in an Auto Scaling group in two Regions and peer the VPCs.

D. Launch the EC2 instances in an Auto Scaling group spanning multiple Availability Zones.

A

Correct answer: A.

B. Spread Placement Group: A spread placement group places instances on distinct underlying hardware, which suits applications that need high availability, but it does not provide the low-latency, high-throughput networking that a cluster placement group offers.

C. Auto Scaling Group in Two Regions and VPC Peering: Using multiple Regions will significantly increase the latency due to geographical distance, which is not suitable for HPC workloads requiring fast inter-node communication.

D. Auto Scaling Group Spanning Multiple Availability Zones: While this offers high availability, the increased latency between Availability Zones makes it less suitable for HPC workloads that require low latency.
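The placement group is specified at launch. A minimal boto3 sketch, with a hypothetical AMI ID and a network-optimized instance type:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A cluster placement group packs instances onto the same high-bisection-
# bandwidth fabric within a single Availability Zone.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="c5n.18xlarge",       # 100 Gbps-class networking
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)
```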

3
Q

A company wants to host a scalable web application on AWS. The application will be accessed by users from different geographic regions of the world.
Application users will be able to download and upload unique data up to gigabytes in size. The development team wants a cost-effective solution to minimize upload and download latency and maximize performance.
What should a solutions architect do to accomplish this?

A. Use Amazon S3 with Transfer Acceleration to host the application.

B. Use Amazon S3 with CacheControl headers to host the application.

C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.

D. Use Amazon EC2 with Auto Scaling and Amazon ElastiCache to host the application.

A

Correct answer: C.

A. Amazon S3 with Transfer Acceleration: While S3 with Transfer Acceleration speeds up the transfer of files over long distances between the client and an S3 bucket, it primarily optimizes the transfer to the bucket and doesn’t address the scalability and performance of the web application itself.

B. Amazon S3 with CacheControl Headers: While CacheControl headers can help with caching static content, they don’t provide the same level of global content delivery optimization as CloudFront. Also, this doesn’t address the scalable hosting of the application.

D. Amazon EC2 with Auto Scaling and Amazon ElastiCache: ElastiCache is mainly used for caching frequently accessed data to improve read performance, not for optimizing large file transfers across geographical regions.

4
Q

A company is migrating from an on-premises infrastructure to the AWS Cloud. One of the company’s applications stores files on a Windows file server farm that uses Distributed File System Replication (DFSR) to keep data in sync. A solutions architect needs to replace the file server farm.
Which service should the solutions architect use?

A. Amazon EFS

B. Amazon FSx

C. Amazon S3

D. AWS Storage Gateway

A

Correct answer: B (Amazon FSx for Windows File Server).

A. Amazon EFS: Amazon Elastic File System (EFS) is primarily designed for Linux-based applications and doesn’t support Windows file system features like DFSR. It’s not suitable for applications that are tightly integrated with Windows file system services.

C. Amazon S3: While Amazon Simple Storage Service (S3) is a highly scalable object storage service, it’s not a file system and doesn’t provide the file system interface or features (like DFSR) required by the existing Windows-based application.

D. AWS Storage Gateway: Storage Gateway connects on-premises environments with cloud-based storage. It’s more of a data migration and hybrid storage solution rather than a direct replacement for a Windows file server farm. It doesn’t inherently provide a managed Windows file system with DFSR support.

5
Q

A company has a legacy application that processes data in two parts. The second part of the process takes longer than the first, so the company has decided to rewrite the application as two microservices running on Amazon ECS that can scale independently. How should a solutions architect integrate the microservices?

A. Implement code in microservice 1 to send data to an Amazon S3 bucket. Use S3 event notifications to invoke microservice 2.

B. Implement code in microservice 1 to publish data to an Amazon SNS topic. Implement code in microservice 2 to subscribe to this topic.

C. Implement code in microservice 1 to send data to Amazon Kinesis Data Firehose. Implement code in microservice 2 to read from Kinesis Data Firehose.

D. Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue.

A

Correct answer: D.

A. Amazon S3 with Event Notifications: While using S3 and event notifications is a valid approach for triggering processes, it’s more suited for scenarios involving file storage and changes. It’s not as efficient for continuous inter-service communication, especially for applications that require more immediate processing of individual messages or data points.

B. Amazon SNS Topic: Amazon Simple Notification Service (SNS) is useful for pub/sub scenarios and fan-out messaging patterns. However, it doesn’t inherently provide the queue-based workload management that SQS offers, which is beneficial in handling varying processing times between microservices.

C. Amazon Kinesis Data Firehose: Kinesis is designed for real-time streaming data and is more complex and costlier for simple inter-service communication. It’s overkill for most microservice architectures unless there’s a specific need for streaming processing.
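A minimal sketch of the SQS handoff in boto3 (queue name and processing logic are hypothetical stand-ins):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="part-two-work")["QueueUrl"]

def process(body: str) -> None:
    print("processing", body)  # stand-in for microservice 2's real work

# Microservice 1: hand off a unit of work.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"record_id": 42}')

# Microservice 2: long-poll, process, then delete. A message that is not
# deleted becomes visible again after the visibility timeout, so work is
# retried rather than lost if processing fails or runs long.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
).get("Messages", [])
for msg in messages:
    process(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```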

6
Q

A company captures clickstream data from multiple websites and analyzes it using batch processing. The data is loaded nightly into Amazon Redshift and is consumed by business analysts. The company wants to move towards near-real-time data processing for timely insights. The solution should process the streaming data with minimal effort and operational overhead.
Which combination of AWS services are MOST cost-effective for this solution? (Choose two.)

A. Amazon EC2

B. AWS Lambda

C. Amazon Kinesis Data Streams

D. Amazon Kinesis Data Firehose

E. Amazon Kinesis Data Analytics

A

Correct answers: C and D.

A. Amazon EC2: While EC2 offers flexibility and control, it requires significant management and operational overhead for scaling, monitoring, and maintaining servers, which is contrary to the requirement of minimal effort.

B. AWS Lambda: Lambda is useful for running code in response to events, but in this scenario, managing the flow and processing of streaming data is more efficiently and cost-effectively handled by Kinesis services.

E. Amazon Kinesis Data Analytics: Although this is a powerful tool for analyzing streaming data using SQL or Apache Flink, the primary need here is the real-time collection and delivery of data. The analysis part is already handled by business analysts using Amazon Redshift.
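With Firehose, producer code is a single call per event; the service handles buffering, batching, and delivery (for example, into Redshift via an S3 staging COPY). The stream name and event shape below are hypothetical:

```python
import json

import boto3

firehose = boto3.client("firehose")

# Each clickstream event is appended to the delivery stream; Firehose handles
# buffering and downstream delivery with no servers to manage.
firehose.put_record(
    DeliveryStreamName="clickstream-delivery",   # hypothetical stream name
    Record={"Data": (json.dumps({"page": "/home", "user_id": "u-1"}) + "\n").encode()},
)
```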

7
Q

A company’s application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. On the first day of every month at midnight, the application becomes much slower when the month-end financial calculation batch executes. This causes the CPU utilization of the EC2 instances to immediately peak to 100%, which disrupts the application.
What should a solutions architect recommend to ensure the application is able to handle the workload and avoid downtime?

A. Configure an Amazon CloudFront distribution in front of the ALB.

B. Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization.

C. Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.

D. Configure Amazon ElastiCache to remove some of the workload from the EC2 instances.

A

Correct answer: C.

A. Amazon CloudFront Distribution in Front of the ALB: CloudFront is a Content Delivery Network (CDN) primarily used to cache and deliver static and dynamic content at edge locations. While it can reduce the load on the servers by caching content, it wouldn’t be effective in this scenario, as the issue is related to CPU-intensive batch processing, not content delivery.

B. Auto Scaling Simple Scaling Policy Based on CPU Utilization: A simple scaling policy that triggers based on CPU utilization would reactively add more instances after the CPU utilization spikes. This reactive approach might not scale up the infrastructure quickly enough to handle the sudden increase in load, leading to potential performance issues.

D. Amazon ElastiCache: While ElastiCache can improve application performance by caching frequently accessed data, the problem in this scenario is related to CPU-intensive processing tasks. Unless the performance issue is due to database load that can be alleviated by caching, ElastiCache may not address the core issue.
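A sketch of scheduled scaling with boto3; group name and capacities are hypothetical, and the Recurrence fields are standard cron expressions evaluated in UTC:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out for the month-end batch: fires at 00:00 UTC on the 1st of every
# month, so capacity is provisioned as the batch kicks off.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="app-asg",               # hypothetical group name
    ScheduledActionName="month-start-scale-out",
    Recurrence="0 0 1 * *",
    MinSize=4,
    MaxSize=10,
    DesiredCapacity=8,
)

# Scale back in once the batch window has passed.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="app-asg",
    ScheduledActionName="month-start-scale-in",
    Recurrence="0 8 1 * *",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
)
```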

8
Q

A company runs a multi-tier web application that hosts news content. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones and use an Amazon Aurora database. A solutions architect needs to make the application more resilient to periodic increases in request rates.
Which architecture should the solutions architect implement? (Choose two.)

A. Add AWS Shield.

B. Add Aurora Replica.

C. Add AWS Direct Connect.

D. Add AWS Global Accelerator.

E. Add an Amazon CloudFront distribution in front of the Application Load Balancer.

A

Correct answers: B and E.

A. AWS Shield: AWS Shield provides protection against DDoS attacks. While it’s important for overall security, it doesn’t specifically address the issue of scaling to handle increased request rates.

C. AWS Direct Connect: Direct Connect provides a dedicated network connection from on-premises to AWS. It’s more about network consistency and reduced bandwidth costs than about scaling an application to handle increased traffic.

D. AWS Global Accelerator: Global Accelerator improves application availability and performance by directing user traffic to optimal endpoints. While it can enhance performance, it’s not as directly impactful as CloudFront for content delivery and Aurora Replicas for database scaling in this scenario.

9
Q

An application running on AWS uses an Amazon Aurora Multi-AZ deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database.
What should the solutions architect do to separate the read requests from the write requests?

A. Enable read-through caching on the Amazon Aurora database.

B. Update the application to read from the Multi-AZ standby instance.

C. Create a read replica and modify the application to use the appropriate endpoint.

D. Create a second Amazon Aurora database and link it to the primary database as a read replica.

A

Correct answer: C.

A. Enable read-through caching on the Amazon Aurora database: While caching can improve read performance, it does not fundamentally address the issue of separating read and write requests. High read volume can still impact the overall performance of the primary database.

B. Update the application to read from the Multi-AZ standby instance: In Aurora Multi-AZ deployments, the standby instance is not designed for scaling read operations. It primarily serves as a failover target to ensure high availability. Using it for reads would not be an effective solution and is not a recommended practice.

D. Create a second Amazon Aurora database and link it to the primary database as a read replica: Creating a completely separate Aurora database and linking it as a read replica introduces unnecessary complexity and potential synchronization challenges. It’s more efficient to use Aurora’s built-in read replica feature.

10
Q

A recently acquired company is required to build its own infrastructure on AWS and migrate multiple applications to the cloud within a month. Each application has approximately 50 TB of data to be transferred. After the migration is complete, this company and its parent company will both require secure network connectivity with consistent throughput from their data centers to the applications. A solutions architect must ensure one-time data migration and ongoing network connectivity.
Which solution will meet these requirements?

A. AWS Direct Connect for both the initial transfer and ongoing connectivity.

B. AWS Site-to-Site VPN for both the initial transfer and ongoing connectivity.

C. AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity.

D. AWS Snowball for the initial transfer and AWS Site-to-Site VPN for ongoing connectivity.

A

Correct answer: C.

A. AWS Direct Connect for Both Transfers and Connectivity: While Direct Connect provides a high-speed, dedicated network connection, provisioning a new connection typically takes weeks, and pushing 50 TB per application over it would likely not fit within the one-month migration window.

B. AWS Site-to-Site VPN for Both Transfers and Connectivity: A Site-to-Site VPN would provide secure connectivity over the internet but may not offer the same level of throughput and performance consistency as Direct Connect. Also, transferring large amounts of data over a VPN might be too slow for the initial migration.

D. AWS Snowball for Initial Transfer and AWS Site-to-Site VPN for Ongoing Connectivity: While Snowball is a good choice for the initial transfer, relying on a Site-to-Site VPN for ongoing connectivity might not meet the need for consistent, high-throughput connectivity, especially for applications that require frequent, large-scale data transfers.

11
Q

A company serves content to its subscribers across the world using an application running on AWS. The application has several Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). Due to a recent change in copyright restrictions, the chief information officer (CIO) wants to block access for certain countries.
Which action will meet these requirements?

A. Modify the ALB security group to deny incoming traffic from blocked countries.

B. Modify the security group for EC2 instances to deny incoming traffic from blocked countries.

C. Use Amazon CloudFront to serve the application and deny access to blocked countries.

D. Use ALB listener rules to return access denied responses to incoming traffic from blocked countries.

A

Correct answer: C.

A. Modify the ALB security group to deny incoming traffic from blocked countries: Security groups in AWS are associated with instances and provide stateful filtering of ingress/egress network traffic to instances. However, they don’t inherently have the capability to filter traffic based on geographic location. They work with IP addresses and IP ranges, but maintaining a list of IP ranges for each country is impractical and error-prone, as these can change frequently.

B. Modify the security group for EC2 instances to deny incoming traffic from blocked countries: This option has the same limitations as option A. Security groups do not natively support geolocation-based filtering. Moreover, directly exposing EC2 instances to internet traffic (even when filtered) is generally not a best practice in terms of security.

D. Use ALB listener rules to return access denied responses to incoming traffic from blocked countries: While ALB listener rules allow for routing decisions based on the content of the request (like headers and request paths), they do not support geolocation-based routing decisions. Therefore, this method is not feasible for blocking access based on the user’s country.
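For the CloudFront approach, geo restriction is part of the distribution configuration. A sketch of enabling a country blacklist on an existing distribution (the distribution ID and country list are hypothetical):

```python
import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "E1EXAMPLE"  # hypothetical distribution ID

# UpdateDistribution needs the full current config plus the ETag from the
# matching Get call.
resp = cloudfront.get_distribution_config(Id=dist_id)
config = resp["DistributionConfig"]

# Deny the listed countries (ISO 3166-1 alpha-2 codes, hypothetical here).
config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "blacklist",
        "Quantity": 2,
        "Items": ["KP", "IR"],
    }
}

cloudfront.update_distribution(
    Id=dist_id, DistributionConfig=config, IfMatch=resp["ETag"]
)
```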

12
Q

A company is creating a new application that will store a large amount of data. The data will be analyzed hourly and modified by several Amazon EC2 Linux instances that are deployed across multiple Availability Zones. The application team believes the amount of space needed will continue to grow for the next 6 months. Which set of actions should a solutions architect take to support these needs?

A. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the application instances.

B. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the application instances.

C. Store the data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the application instances.

D. Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared between the application instances.

A

Correct answer: B.

A. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the application instances: EBS volumes are great for single-instance storage with high performance, but they are inherently limited to being attached to one EC2 instance at a time (except for multi-attach enabled volumes, which have their own limitations). This would not work efficiently for multiple EC2 instances across multiple Availability Zones as required in the question.

C. Store the data in Amazon S3 Glacier: Amazon S3 Glacier is a low-cost storage service designed for data archiving and long-term backup. It is not suitable for scenarios where data needs to be accessed and modified frequently, as it has retrieval times ranging from minutes to hours. This makes it inappropriate for the needs of an application that requires hourly analysis and modification of data.

D. Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared between the application instances: While EBS Provisioned IOPS volumes offer high performance for I/O-intensive workloads, they cannot be natively shared across multiple EC2 instances in different Availability Zones. The requirement for multiple instances to modify data across Availability Zones makes this option unsuitable.
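A sketch of provisioning the shared file system with boto3 (subnet and security group IDs are hypothetical):

```python
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="shared-data", PerformanceMode="generalPurpose"
)

# One mount target per Availability Zone lets instances in every AZ mount the
# same file system over NFS, and storage grows without pre-provisioning.
for subnet in ("subnet-aaa111", "subnet-bbb222"):   # hypothetical subnets in two AZs
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet,
        SecurityGroups=["sg-0123456789abcdef0"],    # hypothetical SG allowing NFS (2049)
    )
```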

13
Q

A company is migrating a three-tier application to AWS. The application requires a MySQL database. In the past, the application users reported poor application performance when creating new entries. These performance issues were caused by users generating different real-time reports from the application during working hours. Which solution will improve the performance of the application when it is moved to AWS?

A. Import the data into an Amazon DynamoDB table with provisioned capacity. Refactor the application to use DynamoDB for reports.

B. Create the database on a compute optimized Amazon EC2 instance. Ensure compute resources exceed the on-premises database.

C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the application to use the reader endpoint for reports.

D. Create an Amazon Aurora MySQL Multi-AZ DB cluster. Configure the application to use the backup instance of the cluster as an endpoint for the reports.

A

Correct answer: C.

A. While DynamoDB offers high performance at scale, it is a NoSQL database service, which differs significantly from a relational database like MySQL. Refactoring the application to use DynamoDB could be resource-intensive and may not be necessary if the only issue is performance during reporting. Additionally, DynamoDB’s data model and query capabilities are different from MySQL, which could lead to significant changes in how the application handles data.

B. This approach might improve performance compared to the on-premises setup. However, managing a database on EC2 instances requires handling many aspects like backups, failover, patching, and scalability manually. This option doesn’t provide the best scalability and high availability compared to managed database services.

D. This option is not feasible because the backup instance in a Multi-AZ deployment is not designed for direct querying or load balancing. It is a standby replica used for failover purposes and is not accessible for read queries under normal operations.
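A sketch of adding Aurora readers and finding the cluster’s reader endpoint (cluster and instance identifiers are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Aurora readers are additional instances in the same cluster; the cluster's
# reader endpoint load-balances connections across all of them.
for i in range(2):
    rds.create_db_instance(
        DBInstanceIdentifier=f"app-aurora-reader-{i}",  # hypothetical names
        DBClusterIdentifier="app-aurora",               # hypothetical cluster
        Engine="aurora-mysql",
        DBInstanceClass="db.r5.large",
    )

cluster = rds.describe_db_clusters(DBClusterIdentifier="app-aurora")["DBClusters"][0]
print("point report queries at:", cluster["ReaderEndpoint"])
```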

14
Q

A solutions architect is deploying a distributed database on multiple Amazon EC2 instances. The database stores all data on multiple instances so it can withstand the loss of an instance. The database requires block storage with low latency and high throughput to support several million transactions per second per server.
Which storage solution should the solutions architect use?

A. Amazon Elastic Block Store (Amazon EBS)

B. Amazon EC2 instance store

C. Amazon Elastic File System (Amazon EFS)

D. Amazon S3

A

Correct answer: B.

A. Amazon Elastic Block Store (Amazon EBS): EBS provides a high-performance block storage service suitable for throughput- and latency-sensitive workloads. However, even volumes such as Provisioned IOPS SSD (io2) may not meet the extreme requirement of several million transactions per second per server, because EBS volumes are network-attached storage and every I/O incurs network latency.

C. Amazon Elastic File System (Amazon EFS): EFS provides scalable file storage for use with AWS Cloud services and on-premises resources. While it’s good for many use cases, it is not optimized for the extremely high transaction rates described in the scenario. EFS is more suitable for use cases where shared file storage is needed.

D. Amazon S3: Amazon S3 is an object storage service and not suitable for database storage requiring block-level storage and high transaction rates. It is designed for durability, storing large amounts of data, and easy access, not for high transactional workloads.

15
Q

Organizers for a global event want to put daily reports online as static HTML pages. The pages are expected to generate millions of views from users around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective solution.
Which action should the solutions architect take to accomplish this?

A. Generate presigned URLs for the files.

B. Use cross-Region replication to all Regions.

C. Use the geoproximity feature of Amazon Route 53.

D. Use Amazon CloudFront with the S3 bucket as its origin.

A

Correct answer: D.

A. Generate presigned URLs for the files: Presigned URLs are typically used to securely share private files from S3 buckets for a limited time. This approach isn’t suitable for public access to static content like HTML pages intended for millions of users, as it would require generating and managing a large number of temporary URLs.

B. Use cross-Region replication to all Regions: Cross-Region replication involves replicating data across different AWS Regions. While this can enhance data availability and durability, it’s not the most efficient way to distribute static content globally. It would require complex management and incur additional costs without providing the latency and performance benefits of a content delivery network (CDN).

C. Use the geoproximity feature of Amazon Route 53: Route 53’s geoproximity routing lets you choose where traffic will be sent based on the geographic location of your users and your resources. However, this is more about routing traffic to different endpoints, rather than efficiently serving static content. It does not provide the caching and global distribution benefits of a CDN.

16
Q

A solutions architect is designing a new service behind Amazon API Gateway. The request patterns for the service will be unpredictable and can change suddenly from 0 requests to over 500 per second. The total size of the data that needs to be persisted in a backend database is currently less than 1 GB with unpredictable future growth. Data can be queried using simple key-value requests. Which combination of AWS services would meet these requirements? (Choose two.)

A. AWS Fargate

B. AWS Lambda

C. Amazon DynamoDB

D. Amazon EC2 Auto Scaling

E. MySQL-compatible Amazon Aurora

A

Correct answers: B and C.

A. AWS Fargate: While Fargate is a serverless compute engine for containers, it’s more suitable for application scenarios where you need more control over the environment and dependencies. It’s not as straightforward and efficient as Lambda for unpredictable, bursty traffic patterns.

D. Amazon EC2 Auto Scaling: EC2 instances with Auto Scaling can handle varying load by adjusting the number of EC2 instances. However, this approach requires more management and isn’t as efficient in scaling rapidly to sudden spikes in traffic compared to AWS Lambda.

E. MySQL-compatible Amazon Aurora: Although Aurora provides high performance and scalability, it is a relational database service, which might be an overkill for simple key-value data storage needs. Also, managing a relational database could be more complex and less cost-effective for this use case compared to DynamoDB.
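A sketch of the serverless pair: a Lambda handler, invoked through API Gateway, performing a key-value lookup in DynamoDB. The table name, key attribute, and path parameter are hypothetical:

```python
import json

import boto3

# Table handle created at module load so it is reused across invocations.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("catalog-items")   # hypothetical table

def handler(event, context):
    # API Gateway proxy integration passes URL path parameters here.
    item_id = event["pathParameters"]["id"]
    resp = table.get_item(Key={"id": item_id})
    if "Item" not in resp:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    # default=str handles DynamoDB's Decimal numbers during serialization.
    return {"statusCode": 200, "body": json.dumps(resp["Item"], default=str)}
```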

17
Q

A start-up company has a web application based in the us-east-1 Region with multiple Amazon EC2 instances running behind an Application Load Balancer across multiple Availability Zones. As the company’s user base grows in the us-west-1 Region, it needs a solution with low latency and high availability.
What should a solutions architect do to accomplish this?

A. Provision EC2 instances in us-west-1. Switch the Application Load Balancer to a Network Load Balancer to achieve cross-Region load balancing.

B. Provision EC2 instances and an Application Load Balancer in us-west-1. Make the load balancer distribute the traffic based on the location of the request.

C. Provision EC2 instances and configure an Application Load Balancer in us-west-1. Create an accelerator in AWS Global Accelerator that uses an endpoint group that includes the load balancer endpoints in both Regions.

D. Provision EC2 instances and configure an Application Load Balancer in us-west-1. Configure Amazon Route 53 with a weighted routing policy. Create alias records in Route 53 that point to the Application Load Balancer.

A

Correct answer: C.

A. While this approach does provision resources in us-west-1 to reduce latency, AWS Network Load Balancers (NLB) do not support cross-Region load balancing. NLBs operate within a single region, so this approach would not provide the desired outcome of distributing traffic across regions.

B. While provisioning EC2 instances and an ALB in us-west-1 is a good step, ALBs do not inherently distribute traffic based on the geographic location of the request. They route traffic within a single region and do not have built-in capabilities for global traffic distribution based on location.

D. This option involves setting up the infrastructure in us-west-1 and using Route 53 for DNS routing. However, a weighted routing policy doesn’t inherently consider the geographic location of the user for routing decisions. It’s more about distributing traffic between different resources based on assigned weights, and not necessarily for latency optimization.
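A sketch of the accelerator setup with boto3. Note that Global Accelerator takes one endpoint group per Region (the question’s phrasing compresses this); the ALB ARNs are hypothetical:

```python
import boto3

# The Global Accelerator control plane lives in us-west-2 regardless of
# where the endpoints run.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accel = ga.create_accelerator(Name="web-accelerator", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=accel["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region, each pointing at that Region's ALB; users
# are routed over the AWS backbone to the closest healthy endpoint.
albs = {
    "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/east-alb/0f0f0f0f0f0f0f0f",
    "us-west-1": "arn:aws:elasticloadbalancing:us-west-1:123456789012:loadbalancer/app/west-alb/1a1a1a1a1a1a1a1a",
}
for region, alb_arn in albs.items():
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": alb_arn, "Weight": 128}],
    )
```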

18
Q

A solutions architect is designing a solution to access a catalog of images and provide users with the ability to submit requests to customize images. Image customization parameters will be in any request sent to an AWS API Gateway. The customized image will be generated on demand, and users will receive a link they can click to view or download their customized image. The solution must be highly available for viewing and customizing images. What is the MOST cost-effective solution to meet these requirements?

A. Use Amazon EC2 instances to manipulate the original image into the requested customizations. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances.

B. Use AWS Lambda to manipulate the original image to the requested customizations. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.

C. Use AWS Lambda to manipulate the original image to the requested customizations. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances.

D. Use Amazon EC2 instances to manipulate the original image into the requested customizations. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.

A

Correct answer: B.

A. Amazon EC2 Instances for Image Manipulation: While EC2 instances can be used for image manipulation, they are generally more expensive and require more management compared to AWS Lambda. You need to manage scaling, ensure high availability, and you pay for continuous running of instances, even if there’s no demand.

C. Storing Manipulated Images in Amazon DynamoDB: DynamoDB is a NoSQL database service, not typically used for storing images. Storing large objects like images in DynamoDB is not cost-effective and not aligned with its intended use case.

D. EC2 Instances and DynamoDB Storage: This option combines the less desirable elements of A and C – using EC2 for image manipulation and DynamoDB for storing images, which would not be as cost-effective or as efficient as using Lambda and S3.

19
Q

A company is planning to migrate a business-critical dataset to Amazon S3. The current solution design uses a single S3 bucket in the us-east-1 Region with versioning enabled to store the dataset. The company’s disaster recovery policy states that all data must be stored in multiple AWS Regions.
How should a solutions architect design the S3 solution?

A. Create an additional S3 bucket in another Region and configure cross-Region replication.

B. Create an additional S3 bucket in another Region and configure cross-origin resource sharing (CORS).

C. Create an additional S3 bucket with versioning in another Region and configure cross-Region replication.

D. Create an additional S3 bucket with versioning in another Region and configure cross-origin resource sharing (CORS).

A

Correct answer: C.

A. Create an additional S3 bucket in another Region and configure cross-Region replication: While this option creates a bucket in another Region and sets up cross-Region replication, it doesn’t mention enabling versioning on the new bucket. Versioning is an important feature for maintaining the integrity of the data, especially in a business-critical dataset. It keeps multiple versions of an object in one bucket, which is useful for recovery in case of accidental deletion or overwriting.

B. Create an additional S3 bucket in another Region and configure cross-origin resource sharing (CORS): CORS is a mechanism that allows many resources (e.g., fonts, JavaScript, etc.) on a web page to be requested from another domain outside the domain from which the resource originated. This option is not relevant to the requirement of replicating data for disaster recovery purposes.

D. Create an additional S3 bucket with versioning in another Region and configure cross-origin resource sharing (CORS): Similar to option B, CORS is not relevant for data replication and disaster recovery. While this option includes versioning, which is good, it does not mention cross-Region replication, which is key to fulfilling the disaster recovery requirement.
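A sketch of the correct setup with boto3: versioning on both buckets, then a replication rule to the second Region (bucket names and the IAM role are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Versioning must be enabled on BOTH buckets before replication is configured.
for bucket in ("critical-data-use1", "critical-data-usw2"):  # hypothetical buckets
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Replicate every new object version from us-east-1 to the us-west-2 bucket.
s3.put_bucket_replication(
    Bucket="critical-data-use1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # hypothetical role
        "Rules": [{
            "ID": "dr-copy",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::critical-data-usw2"},
        }],
    },
)
```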

20
Q

A company has applications running on Amazon EC2 instances in a VPC. One of the applications needs to call an Amazon S3 API to store and read objects. The company’s security policies restrict any internet-bound traffic from the applications.
Which action will fulfill these requirements and maintain security?

A. Configure an S3 interface endpoint.

B. Configure an S3 gateway endpoint.

C. Create an S3 bucket in a private subnet.

D. Create an S3 bucket in the same Region as the EC2 instance.

A

Correct answer: B.

A. Configure an S3 interface endpoint: An interface endpoint (powered by AWS PrivateLink) enables you to connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. However, for S3, a gateway endpoint is a more efficient and cost-effective solution compared to an interface endpoint.

C. Create an S3 bucket in a private subnet: Amazon S3 buckets are not created within a VPC or its subnets. S3 is a global service, and its buckets are not confined to VPCs or subnets. Therefore, this option is not applicable or possible.

D. Create an S3 bucket in the same Region as the EC2 instance: While it’s generally a good practice to create an S3 bucket in the same region as the EC2 instances for latency and cost considerations, merely creating a bucket in the same region does not address the security requirement of restricting internet-bound traffic. A VPC endpoint is needed for private connectivity.
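A sketch of creating the gateway endpoint with boto3 (VPC and route table IDs are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A gateway endpoint installs a prefix-list route in the chosen route tables,
# so S3 API calls stay on the AWS network with no internet gateway or NAT.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",              # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],    # hypothetical route table
)
```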

21
Q

A company’s web application uses an Amazon RDS PostgreSQL DB instance to store its application data. During the financial closing period at the start of every month, accountants run large queries that impact the database’s performance due to high usage. The company wants to minimize the impact that the reporting activity has on the web application.
What should a solutions architect do to reduce the impact on the database with the LEAST amount of effort?

A. Create a read replica and direct reporting traffic to the replica.

B. Create a Multi-AZ database and direct reporting traffic to the standby.

C. Create a cross-Region read replica and direct reporting traffic to the replica.

D. Create an Amazon Redshift database and direct reporting traffic to the Amazon Redshift database.

A

Correct answer: A.

B. Create a Multi-AZ database and direct reporting traffic to the standby: Multi-AZ deployments in RDS are designed for high availability and failover support, not for load balancing or offloading read queries. The standby instance in a Multi-AZ deployment is not used for read operations under normal circumstances; it is maintained as a synchronized copy of the primary instance and used only in case of failover.

C. Create a cross-Region read replica and direct reporting traffic to the replica: While cross-Region read replicas serve a similar purpose to regular read replicas, they are typically used for enhancing disaster recovery and data locality. For the scenario described, a cross-Region replica would be an overkill and might introduce unnecessary latency if the accountants are not located in the region where the replica is created.

D. Create an Amazon Redshift database and direct reporting traffic to the Amazon Redshift database: While Amazon Redshift is a powerful data warehousing service that can handle large-scale reporting and analysis, migrating to Redshift would require significant effort. You would need to extract, transform, and load (ETL) data from your RDS instance to Redshift, which is more complex and time-consuming compared to setting up a read replica.
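A sketch of creating the replica with boto3 and retrieving its dedicated endpoint for the reporting tool (identifiers are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# The replica gets its own endpoint; reporting traffic connects there while
# the web application keeps using the primary's endpoint.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-reporting",    # hypothetical replica name
    SourceDBInstanceIdentifier="app-db",        # hypothetical primary instance
    DBInstanceClass="db.r5.large",
)

rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="app-db-reporting")
endpoint = rds.describe_db_instances(DBInstanceIdentifier="app-db-reporting")[
    "DBInstances"
][0]["Endpoint"]["Address"]
print("reporting endpoint:", endpoint)
```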

22
Q

A company wants to migrate a high performance computing (HPC) application and data from on-premises to the AWS Cloud. The company uses tiered storage on premises with hot high-performance parallel storage to support the application during periodic runs of the application, and more economical cold storage to hold the data when the application is not actively running.
Which combination of solutions should a solutions architect recommend to support the storage needs of the application? (Choose two.)

A. Amazon S3 for cold data storage

B. Amazon Elastic File System (Amazon EFS) for cold data storage

C. Amazon S3 for high-performance parallel storage

D. Amazon FSx for Lustre for high-performance parallel storage

E. Amazon FSx for Windows for high-performance parallel storage

A

Correct answers: A and D.

B. Amazon Elastic File System (Amazon EFS) for cold data storage: While EFS is a fully managed service and a good option for file storage, it is not typically categorized as a ‘cold storage’ solution. EFS is more suited for use cases requiring shared file storage across multiple EC2 instances.

C. Amazon S3 for high-performance parallel storage: Amazon S3 is not designed for high-performance parallel file systems required in HPC workloads. S3 is an object storage service that excels in scalability and durability for data storage but doesn’t offer the kind of file system performance and parallel data processing capabilities that FSx for Lustre does.

E. Amazon FSx for Windows for high-performance parallel storage: Amazon FSx for Windows File Server provides fully managed Microsoft Windows file servers. It is not designed for HPC workloads that typically require high-performance parallel file systems, making it less suited for this scenario compared to FSx for Lustre.
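A sketch of the hot tier: an FSx for Lustre file system linked to the S3 bucket holding the cold data (bucket and subnet are hypothetical):

```python
import boto3

fsx = boto3.client("fsx")

# A Lustre file system linked to the cold-data bucket: objects are lazy-loaded
# from S3 for a compute run, and results can be exported back afterwards.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                       # GiB
    SubnetIds=["subnet-aaa111"],                # hypothetical subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://hpc-cold-data",     # hypothetical bucket
        "ExportPath": "s3://hpc-cold-data/results",
    },
)
```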

23
Q

A company’s application is running on Amazon EC2 instances in a single Region. In the event of a disaster, a solutions architect needs to ensure that the resources can also be deployed to a second Region.
Which combination of actions should the solutions architect take to accomplish this? (Choose two.)

A. Detach a volume on an EC2 instance and copy it to Amazon S3.

B. Launch a new EC2 instance from an Amazon Machine Image (AMI) in a new Region.

C. Launch a new EC2 instance in a new Region and copy a volume from Amazon S3 to the new instance.

D. Copy an Amazon Machine Image (AMI) of an EC2 instance and specify a different Region for the destination.

E. Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the destination Region using that EBS volume.

A

Correct answers: B and D.

A. Detach a volume on an EC2 instance and copy it to Amazon S3: This is not practical because EBS volumes (used with EC2) cannot be directly copied to S3. This process also does not aid in quick disaster recovery in another region.

C. Launch a new EC2 instance in a new Region and copy a volume from Amazon S3 to the new instance: This assumes that the volume’s data is already on S3 and can be transferred to a new instance, which is not a typical or straightforward disaster recovery method. It’s also more complex and time-consuming compared to using AMIs.

E. Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the destination Region using that EBS volume: This option is not feasible because EBS volumes are not stored in S3 and cannot be directly copied or moved across regions in this way.
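A sketch of the two chosen actions with boto3: copy the AMI to the DR Region, then launch from the copy there (AMI ID and instance type are hypothetical):

```python
import boto3

# CopyImage is called in the DESTINATION Region and pulls the AMI across.
ec2_west = boto3.client("ec2", region_name="us-west-2")

copy = ec2_west.copy_image(
    Name="app-server-dr",
    SourceImageId="ami-0123456789abcdef0",   # hypothetical AMI in us-east-1
    SourceRegion="us-east-1",
)

# Wait until the copied AMI is usable, then launch in the DR Region.
ec2_west.get_waiter("image_available").wait(ImageIds=[copy["ImageId"]])
ec2_west.run_instances(
    ImageId=copy["ImageId"],
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
)
```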

24
Q

A manufacturing company wants to implement predictive maintenance on its machinery equipment. The company will install thousands of IoT sensors that will send data to AWS in real time. A solutions architect is tasked with implementing a solution that will receive events in an ordered manner for each machinery asset and ensure that data is saved for further processing at a later time.
Which solution would be MOST efficient?

A. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.

B. Use Amazon Kinesis Data Streams for real-time events with a shard for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon Elastic Block Store (Amazon EBS).

C. Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function for the SQS queue to save data to Amazon Elastic File System (Amazon EFS).

D. Use an Amazon SQS standard queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3.

A

Correct answer: A.

B. Using Amazon EBS with Kinesis Firehose: Saving streaming data directly to Amazon EBS is not a typical use case. EBS is block storage primarily used for persistent storage for EC2 instances, not for big data storage solutions.

C. Amazon SQS FIFO Queue with Amazon EFS: While SQS FIFO queues maintain the order of messages, they are not designed for high-throughput real-time streaming data from thousands of IoT sensors. Additionally, saving data to Amazon EFS would not be as efficient as using S3 for big data storage and analysis.

D. Amazon SQS Standard Queue with Amazon S3: SQS standard queues do not guarantee the order of messages, which is a key requirement in this scenario. Although using S3 for storage is appropriate, the lack of ordering in the queue makes this option less suitable.
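A sketch of the producer side (stream name, asset ID, and reading fields are hypothetical):

```python
import json

import boto3

kinesis = boto3.client("kinesis")

# Using the asset ID as the partition key keeps each machine's events on a
# single shard, so they stay ordered per asset.
def publish_reading(asset_id: str, reading: dict) -> None:
    kinesis.put_record(
        StreamName="machine-telemetry",   # hypothetical stream name
        PartitionKey=asset_id,
        Data=json.dumps(reading).encode(),
    )

publish_reading("press-017", {"temp_c": 81.4, "vibration_hz": 112.0})
```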

25
Q

A solutions architect needs to ensure that API calls to Amazon DynamoDB from Amazon EC2 instances in a VPC do not traverse the internet.
What should the solutions architect do to accomplish this? (Choose two.)

A. Create a route table entry for the endpoint.

B. Create a gateway endpoint for DynamoDB.

C. Create a new DynamoDB table that uses the endpoint.

D. Create an ENI for the endpoint in each of the subnets of the VPC.

E. Create a security group entry in the default security group to provide access.

A

Correct answers: A and B.

C: Creating a new DynamoDB table does not influence the network path used for accessing DynamoDB. The network path is determined by the VPC configuration and the presence of a VPC endpoint for DynamoDB, not by how individual DynamoDB tables are created.

D: This is not necessary for gateway endpoints like the one used for DynamoDB. Gateway endpoints do not require an Elastic Network Interface (ENI) to be set up in each subnet; they are automatically accessible to all subnets in the VPC once created.

E: While managing security groups is important for controlling access to EC2 instances, it does not affect whether the traffic to and from DynamoDB traverses the internet. Security groups are more about controlling access (who can talk to whom) rather than routing traffic internally or externally.
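Both selected actions can be done in a single call: creating the gateway endpoint with RouteTableIds supplied also installs the route entries (IDs are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Creating the gateway endpoint with RouteTableIds also adds the prefix-list
# route entries that keep DynamoDB traffic on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                    # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],          # hypothetical route table
)
```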

26
Q

A gaming company has multiple Amazon EC2 instances in a single Availability Zone for its multiplayer game that communicates with users on Layer 4. The chief technology officer (CTO) wants to make the architecture highly available and cost-effective.
What should a solutions architect do to meet these requirements? (Choose two.)

A. Increase the number of EC2 instances.

B. Decrease the number of EC2 instances.

C. Configure a Network Load Balancer in front of the EC2 instances.

D. Configure an Application Load Balancer in front of the EC2 instances.

E. Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically.

A

Correct answers: C and E.

A. Increase the Number of EC2 Instances: Simply increasing the number of instances without considering load balancing and multi-AZ deployment might improve performance but does not necessarily enhance availability or cost-effectiveness.

B. Decrease the Number of EC2 Instances: Reducing the number of instances without a scaling strategy could negatively impact both the performance and availability of the gaming application, especially during peak times.

D. Configure an Application Load Balancer (ALB) in Front of the EC2 Instances: An ALB operates at Layer 7 (application layer) and is more suited for HTTP/HTTPS traffic. Since the game communicates with users at Layer 4, a Network Load Balancer is more appropriate for this scenario.
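A sketch of the target architecture with boto3 (subnet, VPC, port, and launch template names are hypothetical):

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# Layer 4 entry point: an internet-facing NLB across two Availability Zones.
nlb = elbv2.create_load_balancer(
    Name="game-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-aaa111", "subnet-bbb222"],
)

tg = elbv2.create_target_group(
    Name="game-servers",
    Protocol="TCP",
    Port=7777,                      # hypothetical game port
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="TCP",
    Port=7777,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# The Auto Scaling group spans the same two AZs and registers its instances
# with the NLB's target group automatically.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="game-servers-asg",
    LaunchTemplate={"LaunchTemplateName": "game-server-lt"},  # hypothetical template
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",
    TargetGroupARNs=[tg_arn],
)
```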

27
Q

A company’s legacy application is currently relying on a single-instance Amazon RDS MySQL database without encryption. Due to new compliance requirements, all existing and new data in this database must be encrypted.
How should this be accomplished?

A. Create an Amazon S3 bucket with server-side encryption enabled. Move all the data to Amazon S3. Delete the RDS instance.

B. Enable RDS Multi-AZ mode with encryption at rest enabled. Perform a failover to the standby instance to delete the original instance.

C. Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot.

D. Create an RDS read replica with encryption at rest enabled. Promote the read replica to master and switch the application over to the new master. Delete the old RDS instance.

A

Correct answer: C.

A. Create an Amazon S3 Bucket with Server-Side Encryption Enabled. Move All the Data to Amazon S3. Delete the RDS Instance: This approach involves changing the storage system entirely from RDS to S3, which is not a feasible solution for encrypting an existing RDS database. Moreover, this would require significant changes to the application to adapt to a completely different storage service (from a relational database to object storage), which is not practical in most cases.

B. Enable RDS Multi-AZ Mode with Encryption at Rest Enabled. Perform a Failover to the Standby Instance to Delete the Original Instance: Enabling Multi-AZ mode and encryption for a new standby instance does not automatically encrypt the existing primary instance. Multi-AZ deployments primarily provide high availability rather than a method to encrypt an existing database. The failover to a standby instance does not address the requirement to encrypt the existing data.

D. Create an RDS Read Replica with Encryption at Rest Enabled. Promote the Read Replica to Master and Switch the Application Over to the New Master. Delete the Old RDS Instance: While creating a read replica with encryption enabled is a viable approach, this method is more complex and primarily used for scalability or for serving read-only traffic. Also, you cannot create an encrypted read replica from an unencrypted master instance directly, so this approach does not meet the requirement of encrypting the existing data.
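A sketch of the snapshot-copy-restore flow in boto3 (instance and snapshot identifiers are hypothetical; the KMS alias shown is the RDS default key):

```python
import boto3

rds = boto3.client("rds")
waiter = rds.get_waiter("db_snapshot_available")

# 1. Snapshot the unencrypted instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="legacy-db",               # hypothetical instance
    DBSnapshotIdentifier="legacy-db-plain",
)
waiter.wait(DBSnapshotIdentifier="legacy-db-plain")

# 2. Copy the snapshot with a KMS key; the copy is encrypted at rest.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="legacy-db-plain",
    TargetDBSnapshotIdentifier="legacy-db-encrypted",
    KmsKeyId="alias/aws/rds",
)
waiter.wait(DBSnapshotIdentifier="legacy-db-encrypted")

# 3. Restore a new, encrypted instance and point the application at it.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="legacy-db-v2",
    DBSnapshotIdentifier="legacy-db-encrypted",
)
```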

28
Q

A company has been storing analytics data in an Amazon RDS instance for the past few years. The company asked a solutions architect to find a solution that allows users to access this data using an API. The expectation is that the application will experience periods of inactivity but could receive bursts of traffic within seconds.
Which solution should the solutions architect suggest?

A. Set up an Amazon API Gateway and use Amazon ECS.

B. Set up an Amazon API Gateway and use AWS Elastic Beanstalk.

C. Set up an Amazon API Gateway and use AWS Lambda functions.

D. Set up an Amazon API Gateway and use Amazon EC2 with Auto Scaling.

A

Correct answer: C.

A. Use Amazon ECS: Amazon Elastic Container Service (ECS) is a container management service. While it can handle variable traffic, it is more complex to manage compared to serverless options and is better suited for containerized applications.

B. Use AWS Elastic Beanstalk: Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services. However, for workloads with intermittent and unpredictable traffic patterns, managing and scaling Elastic Beanstalk environments may not be as efficient and cost-effective as using AWS Lambda.

D. Use Amazon EC2 with Auto Scaling: While EC2 instances with Auto Scaling can handle varying loads, this approach requires managing servers and scaling policies. It’s generally more suitable for applications with predictable traffic patterns and not the best fit for workloads with long periods of inactivity and sudden bursts of traffic.

29
Q

A company’s website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The website has a mix of dynamic and static content. Users around the globe are reporting that the website is slow. Which set of actions will improve website performance for users worldwide?

A. Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the Amazon Route 53 record to point to the CloudFront distribution.

B. Create a latency-based Amazon Route 53 record for the ALB. Then launch new EC2 instances with larger instance sizes and register the instances with the ALB.

C. Launch new EC2 instances hosting the same web application in different Regions closer to the users. Then register instances with the same ALB using cross- Region VPC peering.

D. Host the website in an Amazon S3 bucket in the Regions closest to the users and delete the ALB and EC2 instances. Then update an Amazon Route 53 record to point to the S3 buckets.

A

Correct answer: A.

B. Latency-based Amazon Route 53 Record for ALB with Larger EC2 Instances: While increasing EC2 instance sizes may improve processing power, it doesn’t address the global latency issue. Also, a latency-based Route 53 record for an ALB in a single region won’t significantly reduce latency for global users.

C. Launch EC2 Instances in Different Regions with Cross-Region VPC Peering: Hosting the application in multiple regions may reduce latency, but managing cross-region VPC peering and synchronization across regions is complex and may not be necessary for a website with a mix of dynamic and static content.

D. Host Website in Amazon S3 Buckets in Multiple Regions: While S3 can serve static content effectively, it’s not suitable for dynamic content without the use of additional services. Also, managing multiple S3 buckets in different regions and deleting the ALB and EC2 instances would require significant changes to the infrastructure and might not be feasible.
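A sketch of creating the distribution with the ALB as a custom origin (the ALB domain is hypothetical; the CachePolicyId is the AWS managed CachingOptimized policy):

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Static content is cached at edge locations; dynamic paths can be given
# their own cache behavior with caching disabled.
cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "alb-dist-1",
        "Comment": "Edge caching in front of the ALB",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "alb-origin",
                "DomainName": "app-alb-123.us-east-1.elb.amazonaws.com",  # hypothetical
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "alb-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",  # managed CachingOptimized
        },
    }
)
```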

30
Q

A company must generate sales reports at the beginning of every month. The reporting process launches 20 Amazon EC2 instances on the first of the month. The process runs for 7 days and cannot be interrupted. The company wants to minimize costs.
Which pricing model should the company choose?

A. Reserved Instances

B. Spot Block Instances

C. On-Demand Instances

D. Scheduled Reserved Instances

A

Correct answer: D.

A. Reserved Instances: Regular Reserved Instances offer a discount compared to On-Demand pricing, but they are typically purchased for continuous use (1-year or 3-year terms), which isn’t fully aligned with the company’s requirement of using instances for only 7 days each month.

B. Spot Block Instances: Spot Blocks run for a defined duration of at most 6 hours, so they cannot cover an uninterrupted 7-day run, and regular Spot Instances can be reclaimed at any time when capacity is needed. Because the process cannot be interrupted, this option is too risky.

C. On-Demand Instances: While On-Demand Instances provide the flexibility of paying for compute capacity by the hour without long-term commitments, they are more expensive compared to Reserved Instances, and especially Scheduled Reserved Instances, for predictable, recurring usage patterns like this one.

31
Q

A company currently operates a web application backed by an Amazon RDS MySQL database. It has automated backups that are run daily and are not encrypted. A security audit requires future backups to be encrypted and the unencrypted backups to be destroyed. The company will make at least one encrypted backup before destroying the old backups.
What should be done to enable encryption for future backups?

A. Enable default encryption for the Amazon S3 bucket where backups are stored.

B. Modify the backup section of the database configuration to toggle the Enable encryption check box.

C. Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot.

D. Enable an encrypted read replica on RDS for MySQL. Promote the encrypted read replica to primary. Remove the original database instance.

A

Correct answer: C.

A. Enable Default Encryption for the Amazon S3 Bucket Where Backups are Stored: Amazon RDS managed backups are not directly exposed to users in S3 buckets in a way that they can apply S3 bucket policies or encryption settings. RDS handles backups internally, and encryption needs to be set at the RDS level, not the S3 bucket level.

B. Modify the Backup Section of the Database Configuration to Toggle the Enable Encryption Checkbox: In Amazon RDS, you cannot directly enable encryption for an existing DB instance by modifying its backup configuration. Encryption needs to be enabled at the instance level, which typically requires creating a new encrypted instance from a snapshot as described in option C.

D. Enable an Encrypted Read Replica on RDS for MySQL. Promote the Encrypted Read Replica to Primary. Remove the Original Database Instance: This approach is more complex and typically used for scalability or read-heavy workloads. While it could achieve the goal of an encrypted database, it’s not the most straightforward method for the sole purpose of encrypting backups.

32
Q

A company is hosting a website behind multiple Application Load Balancers. The company has different distribution rights for its content around the world. A solutions architect needs to ensure that users are served the correct content without violating distribution rights.
Which configuration should the solutions architect choose to meet these requirements?

A. Configure Amazon CloudFront with AWS WAF.

B. Configure Application Load Balancers with AWS WAF.

C. Configure Amazon Route 53 with a geolocation policy.

D. Configure Amazon Route 53 with a geoproximity routing policy.

A

Correct answer: A.

B. Configure Application Load Balancers with AWS WAF: While ALBs can be used with AWS WAF for filtering incoming traffic based on various criteria, they do not inherently provide global content distribution or geolocation-based content serving capabilities. ALBs are more suited for load balancing of incoming application traffic across multiple targets in a region.

C. Configure Amazon Route 53 with a Geolocation Policy: Route 53 with a geolocation policy can route traffic based on the location of the users. However, it primarily routes traffic rather than controlling content access based on distribution rights. It’s more about directing users to the nearest endpoint and does not have the integrated web application firewall capabilities.

D. Configure Amazon Route 53 with a Geoproximity Routing Policy: Geoproximity routing lets you choose where traffic will be sent based on the geographic location of your users and your resources. However, like the geolocation policy, it is more focused on traffic routing for performance optimization rather than enforcing content distribution rights.
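
A sketch of the correct answer (A): AWS WAF attached to the CloudFront distribution can enforce distribution rights with a geo match rule. A minimal boto3 example; names and country codes are illustrative placeholders, and a web ACL with CLOUDFRONT scope must be created in us-east-1:

    import boto3

    # Web ACLs with Scope=CLOUDFRONT must be created in us-east-1.
    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    wafv2.create_web_acl(
        Name="distribution-rights",
        Scope="CLOUDFRONT",
        DefaultAction={"Allow": {}},
        Rules=[{
            "Name": "block-restricted-countries",
            "Priority": 0,
            # Country codes are illustrative placeholders.
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["DE", "FR"]}},
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "blockRestricted",
            },
        }],
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "distributionRights",
        },
    )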

27
Q

A solutions architect has created a new AWS account and must secure AWS account root user access.
Which combination of actions will accomplish this? (Choose two.)

A. Ensure the root user uses a strong password.

B. Enable multi-factor authentication to the root user.

C. Store root user access keys in an encrypted Amazon S3 bucket.

D. Add the root user to a group containing administrative permissions.

E. Apply the required permissions to the root user with an inline policy document.

A

C. Store Root User Access Keys in an Encrypted Amazon S3 Bucket: It’s generally recommended to not use root user access keys at all. Instead, use IAM users for everyday access to AWS. If you must use root user access keys, they should be handled with extreme caution, but storing them in an S3 bucket, even if encrypted, is not a best practice.

D. Add the Root User to a Group Containing Administrative Permissions: The root user inherently has full access to all resources in the AWS account and cannot be restricted by IAM permissions or added to IAM groups. This action is not applicable to the root user.

E. Apply the Required Permissions to the Root User with an Inline Policy Document: The root user account always has full administrative permissions and these permissions cannot be limited or changed. Therefore, applying permissions with an inline policy is not applicable to the root user.

27
Q

A solutions architect at an ecommerce company wants to back up application log data to Amazon S3. The solutions architect is unsure how frequently the logs will be accessed or which logs will be accessed the most. The company wants to keep costs as low as possible by using the appropriate S3 storage class.
Which S3 storage class should be implemented to meet these requirements?

A. S3 Glacier

B. S3 Intelligent-Tiering

C. S3 Standard-Infrequent Access (S3 Standard-IA)

D. S3 One Zone-Infrequent Access (S3 One Zone-IA)

A

A. S3 Glacier: This storage class is suited to long-term archiving where retrieval is infrequent and can tolerate delays (retrievals can take minutes to hours). It's not ideal for log files that may need to be accessed on short notice, especially when it is unknown in advance which logs will be needed.

C. S3 Standard-Infrequent Access (S3 Standard-IA): While this storage class offers lower storage costs for infrequently accessed data, it doesn’t automatically adjust to changing access patterns like Intelligent-Tiering does. If the logs end up being accessed more frequently, costs could be higher than necessary.

D. S3 One Zone-Infrequent Access (S3 One Zone-IA): This is similar to S3 Standard-IA but stores data in only one Availability Zone. It’s generally used for data that can be recreated if lost and is not recommended for critical data backup. Also, like Standard-IA, it doesn’t adapt to changing access patterns.
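
For reference, choosing Intelligent-Tiering (the correct answer, B) is just a storage-class choice at upload time. A minimal boto3 sketch; the bucket and key names are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Upload a log object directly into S3 Intelligent-Tiering;
    # S3 then moves it between access tiers as access patterns change.
    s3.put_object(
        Bucket="example-log-archive",      # placeholder bucket name
        Key="app/2024-01-01.log",
        Body=open("2024-01-01.log", "rb"),
        StorageClass="INTELLIGENT_TIERING",
    )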

28
Q

A company’s website is used to sell products to the public. The site runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). There is also an Amazon CloudFront distribution, and AWS WAF is being used to protect against SQL injection attacks. The ALB is the origin for the CloudFront distribution. A recent review of security logs revealed an external malicious IP that needs to be blocked from accessing the website. What should a solutions architect do to protect the application?

A. Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address.

B. Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address.

C. Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.

D. Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.

A

A. Modify the Network ACL on the CloudFront Distribution: Network ACLs are a VPC construct; Amazon CloudFront distributions do not have network ACLs at all. The proper way to control access at the CloudFront layer is through AWS WAF.

C. Modify the Network ACL for the EC2 Instances in the Target Groups Behind the ALB: Network ACLs are less granular and more complex to manage for this purpose. They are stateless and evaluate both inbound and outbound traffic, making them less ideal for blocking specific IPs compared to AWS WAF.

D. Modify the Security Groups for the EC2 Instances in the Target Groups Behind the ALB: Security groups act as a virtual firewall at the instance level and support only allow rules, so they cannot explicitly deny a specific IP address. In any case, it is more efficient to stop malicious traffic at the perimeter (i.e., at the CloudFront/WAF level) before it ever reaches your infrastructure.
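
A sketch of the correct answer (B): create a WAF IP set containing the malicious address and reference it from a blocking rule. The boto3 calls below use placeholder names and an example IP; the rule would be merged into the web ACL already attached to the CloudFront distribution (via update_web_acl):

    import boto3

    wafv2 = boto3.client("wafv2", region_name="us-east-1")  # CLOUDFRONT scope

    ip_set = wafv2.create_ip_set(
        Name="blocked-ips",
        Scope="CLOUDFRONT",
        IPAddressVersion="IPV4",
        Addresses=["203.0.113.7/32"],  # example malicious IP
    )

    # Rule to add to the existing web ACL's Rules list.
    block_rule = {
        "Name": "block-malicious-ip",
        "Priority": 0,
        "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "blockMaliciousIp",
        },
    }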

29
Q

A solutions architect is designing an application for a two-step order process. The first step is synchronous and must return to the user with little latency. The second step takes longer, so it will be implemented in a separate component. Orders must be processed exactly once and in the order in which they are received.
How should the solutions architect integrate these components?

A. Use Amazon SQS FIFO queues.

B. Use an AWS Lambda function along with Amazon SQS standard queues.

C. Create an SNS topic and subscribe an Amazon SQS FIFO queue to that topic.

D. Create an SNS topic and subscribe an Amazon SQS Standard queue to that topic.

A

B. Use an AWS Lambda Function Along with Amazon SQS Standard Queues: While AWS Lambda is suitable for handling asynchronous processes, using it with SQS Standard queues doesn’t guarantee the order of message processing or exactly-once delivery, as Standard queues provide at-least-once delivery and do not guarantee the order of messages.

C. Create an SNS Topic and Subscribe an Amazon SQS FIFO Queue to That Topic: While this setup could technically work, SNS is generally used for publish/subscribe scenarios with multiple subscribers. If the requirement is simply to process orders in sequence and exactly once, adding SNS into the architecture might be unnecessary complexity.

D. Create an SNS Topic and Subscribe an Amazon SQS Standard Queue to That Topic: This option, like option B, doesn’t guarantee order or exactly-once processing due to the nature of Standard queues.
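
A sketch of the correct answer (A): a FIFO queue gives ordered, exactly-once processing between the synchronous front end and the slower second component. Minimal boto3 example with placeholder names:

    import boto3

    sqs = boto3.client("sqs")

    # FIFO queue names must end in ".fifo".
    queue = sqs.create_queue(
        QueueName="orders.fifo",
        Attributes={
            "FifoQueue": "true",
            "ContentBasedDeduplication": "true",  # dedupe on message body
        },
    )

    # The synchronous step enqueues the order and returns immediately;
    # a single MessageGroupId keeps all orders in strict arrival order
    # for the slower downstream consumer.
    sqs.send_message(
        QueueUrl=queue["QueueUrl"],
        MessageBody='{"orderId": "1234"}',
        MessageGroupId="orders",
    )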

30
Q

A web application is deployed in the AWS Cloud. It consists of a two-tier architecture that includes a web layer and a database layer. The web server is vulnerable to cross-site scripting (XSS) attacks.
What should a solutions architect do to remediate the vulnerability?

A. Create a Classic Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.

B. Create a Network Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.

C. Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.

D. Create an Application Load Balancer. Put the web layer behind the load balancer and use AWS Shield Standard.

A

A. Classic Load Balancer with AWS WAF: The Classic Load Balancer is a previous generation AWS load balancer. While it can handle HTTP/HTTPS traffic, it does not offer the advanced routing and WAF integration capabilities of an ALB.

B. Network Load Balancer with AWS WAF: Network Load Balancers (NLBs) are primarily used for TCP traffic where high performance and low latency are required. NLBs operate at Layer 4 (Transport Layer) and are not designed for application-level (Layer 7) traffic management and protection, like XSS attacks. Also, AWS WAF cannot be directly integrated with NLBs.

D. Application Load Balancer with AWS Shield Standard: AWS Shield Standard provides protection against DDoS attacks but does not offer the same level of application-specific security against web exploits like XSS as AWS WAF. While using an ALB is appropriate, AWS Shield Standard is not tailored to protect against XSS attacks.
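
A sketch of the correct answer (C): attach AWS WAF to the Application Load Balancer with a rule that covers XSS, for example the AWS managed common rule set. A minimal boto3 fragment; the ARNs are placeholders, and the rule would sit inside a regional web ACL:

    import boto3

    wafv2 = boto3.client("wafv2")  # same Region as the ALB; Scope=REGIONAL

    # Rule for the web ACL's Rules list; the managed common rule set
    # includes cross-site scripting (XSS) protections.
    xss_rule = {
        "Name": "aws-common-rules",
        "Priority": 0,
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }},
        "OverrideAction": {"None": {}},  # required for rule-group rules
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "commonRules",
        },
    }

    # Associate the regional web ACL with the ALB (placeholder ARNs).
    wafv2.associate_web_acl(
        WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/web-layer/placeholder",
        ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/placeholder",
    )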

31
Q

A company’s website is using an Amazon RDS MySQL Multi-AZ DB instance for its transactional data storage. There are other internal systems that query this DB instance to fetch data for internal batch processing. The RDS DB instance slows down significantly when the internal systems fetch data. This impacts the website’s read and write performance, and the users experience slow response times.
Which solution will improve the website’s performance?

A. Use an RDS PostgreSQL DB instance instead of a MySQL database.

B. Use Amazon ElastiCache to cache the query responses for the website.

C. Add an additional Availability Zone to the current RDS MySQL Multi-AZ DB instance.

D. Add a read replica to the RDS DB instance and configure the internal systems to query the read replica.

A

A. Use an RDS PostgreSQL DB Instance Instead of a MySQL Database: Simply changing the database engine from MySQL to PostgreSQL might not address the fundamental issue of high load due to read queries from internal systems. The solution needs to address the architecture of how the database is accessed rather than the database engine type.

B. Use Amazon ElastiCache to Cache the Query Responses for the Website: While caching can improve the performance of read-heavy applications, it may not be the optimal solution if the slowdown is caused by internal systems performing large or complex queries. Caching is typically more effective for frequent and similar read requests, not for internal batch processing tasks.

C. Add an Additional Availability Zone to the Current RDS MySQL Multi-AZ DB Instance: Multi-AZ deployments provide high availability and failover support but do not directly improve performance issues caused by high read traffic. The read traffic would still be directed to the primary DB instance in a Multi-AZ setup.
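
A sketch of the correct answer (D): create a read replica and point the internal batch systems at the replica's endpoint so their queries no longer touch the primary. boto3 example with placeholder identifiers:

    import boto3

    rds = boto3.client("rds")

    # Asynchronous read replica of the production instance.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="prod-mysql-reporting",  # placeholder names
        SourceDBInstanceIdentifier="prod-mysql",
    )

    # Internal systems connect to the replica's endpoint, not the primary's.
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="prod-mysql-reporting")
    replica = rds.describe_db_instances(DBInstanceIdentifier="prod-mysql-reporting")
    print(replica["DBInstances"][0]["Endpoint"]["Address"])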

32
Q

An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run in an Amazon EC2 Auto Scaling group behind an Application Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 40%.
What should a solutions architect do to maintain the desired performance across all instances in the group?

A. Use a simple scaling policy to dynamically scale the Auto Scaling group.

B. Use a target tracking policy to dynamically scale the Auto Scaling group.

C. Use an AWS Lambda function to update the desired Auto Scaling group capacity.

D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group.

A

A. Use a Simple Scaling Policy: While a simple scaling policy can respond to changes in a specific metric, it’s less sophisticated than target tracking. Simple policies typically involve manually setting thresholds for scaling actions, which might not be as effective in maintaining a specific target metric like CPU utilization.

C. Use an AWS Lambda Function to Update the Desired Auto Scaling Group Capacity: This approach would require custom development and wouldn’t be as responsive or efficient as using built-in Auto Scaling policies. It adds unnecessary complexity when a built-in solution (target tracking) can meet the requirements.

D. Use Scheduled Scaling Actions: Scheduled scaling is useful when you know specific times where scaling actions should occur (like predictable high and low traffic periods). However, for maintaining a consistent performance metric like CPU utilization, it’s less effective compared to a target tracking policy, which can respond in real-time to changing conditions.
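
A sketch of the correct answer (B): a target tracking policy that holds average CPU at the 40% sweet spot. boto3 example with a placeholder group name:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Auto Scaling adds or removes instances automatically to keep the
    # group's average CPU utilization at or near the 40% target.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",  # placeholder
        PolicyName="hold-cpu-at-40",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 40.0,
        },
    )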

33
Q

A company runs an internal browser-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during work hours, but scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs well by mid-morning.
How should the scaling be changed to address the staff complaints and keep costs to a minimum?

A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.

B. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period.

C. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period.

D. Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens.

A

B. Implement a Step Scaling Action Triggered at a Lower CPU Threshold, and Decrease the Cooldown Period: Step scaling reacts to changes in demand based on specific CloudWatch metrics, such as CPU utilization. While this could improve responsiveness to load changes, it might not fully address the morning performance issue, as scaling up only happens after an increase in load is detected, which could still result in slow performance.

C. Implement a Target Tracking Action Triggered at a Lower CPU Threshold, and Decrease the Cooldown Period: Like step scaling, target tracking responds to changes in demand. However, it also may not adequately address the initial slow performance in the morning, as it requires a trigger (like increased CPU utilization) to initiate scaling.

D. Implement a Scheduled Action that Sets the Minimum and Maximum Capacity to 20 Shortly Before the Office Opens: Setting both the minimum and maximum capacity to 20 ensures that 20 instances will always be running during work hours, which might be unnecessary and could lead to higher costs, especially if the full capacity is not always needed.
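
A sketch of the correct answer (A): a scheduled action raises the desired capacity just before the office opens, while the minimum stays at 2 so the group can still scale back down when demand drops. boto3 example; the cron expression (UTC) and names are placeholders:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Pre-warm the group to 20 instances at 08:30 UTC on weekdays.
    # Min stays at 2, so normal scale-in still works later in the day.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="internal-app-asg",  # placeholder
        ScheduledActionName="prewarm-before-office-opens",
        Recurrence="30 8 * * MON-FRI",
        MinSize=2,
        MaxSize=20,
        DesiredCapacity=20,
    )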

34
Q

A financial services company has a web application that serves users in the United States and Europe. The application consists of a database tier and a web server tier. The database tier consists of a MySQL database hosted in us-east-1. Amazon Route 53 geoproximity routing is used to direct traffic to instances in the closest Region. A performance review of the system reveals that European users are not receiving the same level of query performance as those in the United States.
Which changes should be made to the database tier to improve performance?

A. Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in one of the European Regions.

B. Migrate the database to Amazon DynamoDB. Use DynamoDB global tables to enable replication to additional Regions.

C. Deploy MySQL instances in each Region. Deploy an Application Load Balancer in front of MySQL to reduce the load on the primary instance.

D. Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure read replicas in one of the European Regions.

A

A. Migrate to Amazon RDS for MySQL with Multi-AZ in Europe: While using Amazon RDS with a Multi-AZ deployment in Europe will provide high availability, it doesn’t address the issue of latency for European users when accessing a database primarily hosted in the US.

B. Migrate to Amazon DynamoDB with Global Tables: While DynamoDB global tables provide multi-region, fully replicated, fast access, it would require migrating to a completely different database technology (NoSQL). This could involve significant changes to the application and might not be necessary if the application’s needs are met by a relational database like MySQL.

C. Deploy MySQL Instances in Each Region with an ALB: Deploying MySQL instances in each region and using an Application Load Balancer (ALB) is not a typical or recommended architecture for database scalability and global performance. ALBs are primarily used for distributing HTTP/HTTPS traffic across multiple targets in the same region and are not suitable for database load balancing.
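
A sketch of the correct answer (D): after migrating to Aurora MySQL, a global database adds a low-latency, read-only secondary cluster in Europe. Abridged boto3 example with placeholder identifiers; a real call would also pin matching engine versions:

    import boto3

    # Promote the existing Aurora MySQL cluster into a global database.
    rds_us = boto3.client("rds", region_name="us-east-1")
    rds_us.create_global_cluster(
        GlobalClusterIdentifier="app-global",  # placeholder
        SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:app-aurora-mysql",
    )

    # Add a read-only secondary cluster in a European Region.
    rds_eu = boto3.client("rds", region_name="eu-west-1")
    rds_eu.create_db_cluster(
        DBClusterIdentifier="app-aurora-mysql-eu",
        Engine="aurora-mysql",
        GlobalClusterIdentifier="app-global",
    )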

35
Q

A company hosts a static website on-premises and wants to migrate the website to AWS. The website should load as quickly as possible for users around the world. The company also wants the most cost-effective solution.
What should a solutions architect do to accomplish this?

A. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Replicate the S3 bucket to multiple AWS Regions.

B. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin.

C. Copy the website content to an Amazon EBS-backed Amazon EC2 instance running Apache HTTP Server. Configure Amazon Route 53 geolocation routing policies to select the closest origin.

D. Copy the website content to multiple Amazon EBS-backed Amazon EC2 instances running Apache HTTP Server in multiple AWS Regions. Configure Amazon CloudFront geolocation routing policies to select the closest origin.

A

A. Replicate the S3 Bucket to Multiple AWS Regions: While replication improves data durability, it isn’t necessary for a static website’s performance and can increase costs. CloudFront effectively distributes content globally without the need for multi-region S3 replication.

C. Amazon EBS-Backed EC2 Instance with Route 53 Geolocation: This approach requires managing web servers (EC2 instances) and does not leverage the benefits of a CDN. It’s more complex and less cost-effective than using S3 with CloudFront.

D. Multiple EC2 Instances in Multiple Regions with CloudFront: Hosting the website on multiple EC2 instances across regions is overkill for a static website and not cost-effective. It involves significant operational overhead compared to using S3 and CloudFront.
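
A sketch of the correct answer (B): host the content in S3 with static website hosting and put CloudFront in front of it. Abridged boto3 example; the bucket name is a placeholder, and only the origin wiring of the CloudFront config is shown since a full DistributionConfig has many more required fields:

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_website(
        Bucket="example-static-site",  # placeholder
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )

    # Origin fragment for create_distribution: CloudFront pulls from the
    # S3 website endpoint and caches it at edge locations worldwide.
    origin = {
        "Id": "s3-website-origin",
        "DomainName": "example-static-site.s3-website-us-east-1.amazonaws.com",
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "http-only",  # S3 website endpoints are HTTP
        },
    }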

36
Q

A solutions architect is designing storage for a high performance computing (HPC) environment based on Amazon Linux. The workload stores and processes a large amount of engineering drawings that require shared storage and heavy computing.
Which storage option would be the optimal solution?

A. Amazon Elastic File System (Amazon EFS)

B. Amazon FSx for Lustre

C. Amazon EC2 instance store

D. Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1)

A

A. Amazon Elastic File System (Amazon EFS): While EFS provides a scalable, elastic, and shared file system, it is not specifically optimized for the high-speed requirements of HPC workloads like FSx for Lustre.

C. Amazon EC2 Instance Store: Instance store provides temporary block-level storage directly attached to the host computer. However, it is not a shared storage solution and is ephemeral, meaning the data is lost if the instance is stopped or terminated.

D. Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1): While io1 volumes provide high performance with provisioned IOPS, they are block storage volumes that are not natively designed for file sharing across multiple EC2 instances.
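
A sketch of the correct answer (B): provision an FSx for Lustre file system that every HPC node mounts for shared, high-throughput access. boto3 example; the subnet ID, deployment type, and capacity are illustrative:

    import boto3

    fsx = boto3.client("fsx")

    # Scratch file system sized in 1.2 TiB increments; all HPC nodes
    # mount it with the open-source Lustre client for shared access.
    fsx.create_file_system(
        FileSystemType="LUSTRE",
        StorageCapacity=1200,  # GiB
        SubnetIds=["subnet-0123456789abcdef0"],  # placeholder
        LustreConfiguration={"DeploymentType": "SCRATCH_2"},
    )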

37
Q

A company is performing an AWS Well-Architected Framework review of an existing workload deployed on AWS. The review identified a public-facing website running on the same Amazon EC2 instance as a Microsoft Active Directory domain controller that was installed recently to support other AWS services. A solutions architect needs to recommend a new design that would improve the security of the architecture and minimize the administrative demand on IT staff.
What should the solutions architect recommend?

A. Use AWS Directory Service to create a managed Active Directory. Uninstall Active Directory on the current EC2 instance.

B. Create another EC2 instance in the same subnet and reinstall Active Directory on it. Uninstall Active Directory.

C. Use AWS Directory Service to create an Active Directory connector. Proxy Active Directory requests to the Active Directory domain controller running on the current EC2 instance.

D. Enable AWS Single Sign-On (AWS SSO) with Security Assertion Markup Language (SAML) 2.0 federation with the current Active Directory controller. Modify the EC2 instance’s security group to deny public access to Active Directory.

A

B. Create Another EC2 Instance for Active Directory: While separating the domain controller from the web server is a good practice, managing Active Directory on EC2 instances requires significant administrative effort and does not leverage the benefits of a managed service.

C. Active Directory Connector with Existing EC2 Instance: An AD Connector is typically used to proxy directory requests to an existing on-premises Active Directory. In this case, it would still leave the Active Directory domain controller on the same instance as the public-facing website, which is not ideal for security.

D. Enable AWS SSO with SAML 2.0 Federation: While AWS SSO is a good solution for centralized access management, it does not address the core issue of having the Active Directory domain controller on the same EC2 instance as the public website. Modifying the security group to deny public access to Active Directory does not change this fundamental issue.
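
A sketch of the correct answer (A): AWS Directory Service can stand up a managed Microsoft AD, after which the self-managed domain controller can be uninstalled from the web server instance. boto3 example; all values are placeholders:

    import boto3

    ds = boto3.client("ds")

    # AWS operates the domain controllers; patching, replication, and
    # snapshots are handled by the managed service, reducing IT workload.
    ds.create_microsoft_ad(
        Name="corp.example.com",        # placeholder directory DNS name
        Password="TempAdminPassw0rd!",  # placeholder Admin password
        Edition="Standard",
        VpcSettings={
            "VpcId": "vpc-0123456789abcdef0",
            "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # two AZs
        },
    )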

38
Q

A company hosts a static website within an Amazon S3 bucket. A solutions architect needs to ensure that data can be recovered in case of accidental deletion.
Which action will accomplish this?

A. Enable Amazon S3 versioning.

B. Enable Amazon S3 Intelligent-Tiering.

C. Enable an Amazon S3 lifecycle policy.

D. Enable Amazon S3 cross-Region replication.

A

B. Enable Amazon S3 Intelligent-Tiering: Intelligent-Tiering is a storage class designed to optimize costs by automatically moving data to the most cost-effective access tier, based on access patterns. It does not provide versioning or protect against accidental deletion.

C. Enable an Amazon S3 Lifecycle Policy: Lifecycle policies are used to automate moving objects to different storage classes or deleting objects after a certain period. While useful for managing object lifecycle and costs, they do not inherently protect against accidental deletion.

D. Enable Amazon S3 Cross-Region Replication: Cross-Region replication replicates objects across buckets in different AWS Regions. While it provides geographical redundancy, it does not protect against accidental deletion. If an object is deleted in the source bucket, the deletion can also be replicated to the destination bucket (depending on the configuration).
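
A sketch of the correct answer (A): with versioning enabled, a delete only adds a delete marker, and earlier versions remain recoverable. boto3 example with a placeholder bucket:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_versioning(
        Bucket="example-static-site",  # placeholder
        VersioningConfiguration={"Status": "Enabled"},
    )

    # After an accidental delete, prior versions can be listed and restored.
    versions = s3.list_object_versions(Bucket="example-static-site", Prefix="index.html")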

39
Q

A company’s production application runs online transaction processing (OLTP) transactions on an Amazon RDS MySQL DB instance. The company is launching a new reporting tool that will access the same data. The reporting tool must be highly available and not impact the performance of the production application.
How can this be achieved?

A. Create hourly snapshots of the production RDS DB instance.

B. Create a Multi-AZ RDS Read Replica of the production RDS DB instance.

C. Create multiple RDS Read Replicas of the production RDS DB instance. Place the Read Replicas in an Auto Scaling group.

D. Create a Single-AZ RDS Read Replica of the production RDS DB instance. Create a second Single-AZ RDS Read Replica from the replica.

A

A. Create Hourly Snapshots of the Production RDS DB Instance: Snapshots are point-in-time backups of the database and do not provide real-time data access for the reporting tool. They also involve a restore process to access the data, which is not suitable for real-time reporting needs.

C. Create Multiple RDS Read Replicas in an Auto Scaling Group: While using multiple Read Replicas can distribute the read load, RDS does not support Auto Scaling groups for Read Replicas. Auto Scaling is typically used for scaling compute capacity for Amazon EC2 instances.

D. Create a Single-AZ RDS Read Replica and a Second Single-AZ RDS Read Replica from the Replica: While this approach provides additional read capacity, it does not offer the high availability benefits of a Multi-AZ deployment. If the first Read Replica fails, the second one does not automatically take over.

40
Q

A company runs an application in a branch office within a small data closet with no virtualized compute resources. The application data is stored on an NFS volume. Compliance standards require a daily offsite backup of the NFS volume.
Which solution meets these requirements?

A. Install an AWS Storage Gateway file gateway on premises to replicate the data to Amazon S3.

B. Install an AWS Storage Gateway file gateway hardware appliance on premises to replicate the data to Amazon S3.

C. Install an AWS Storage Gateway volume gateway with stored volumes on premises to replicate the data to Amazon S3.

D. Install an AWS Storage Gateway volume gateway with cached volumes on premises to replicate the data to Amazon S3.

A

A. AWS Storage Gateway file gateway is used for file-level access and is a suitable way to replicate data to Amazon S3. However, the software file gateway must be deployed as a virtual machine, and this branch office has no virtualized compute resources to host it. The hardware appliance (option B) delivers the same file gateway as a dedicated physical device, making it the right fit here.

C. AWS Storage Gateway volume gateway with stored volumes is typically used for block-level access and may not be the best fit for this file-level NFS volume backup requirement.

D. AWS Storage Gateway volume gateway with cached volumes is again primarily designed for block-level access and may not be the most efficient solution for replicating an NFS volume to Amazon S3.

41
Q

A company’s web application is using multiple Linux Amazon EC2 instances and storing data on Amazon Elastic Block Store (Amazon EBS) volumes. The company is looking for a solution to increase the resiliency of the application in case of a failure and to provide storage that complies with atomicity, consistency, isolation, and durability (ACID). What should a solutions architect do to meet these requirements?

A. Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2 instance.

B. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Mount an instance store on each EC2 instance.

C. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon Elastic File System (Amazon EFS) and mount a target on each instance.

D. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).

A

A. While launching the application in each Availability Zone and attaching EBS volumes to each EC2 instance would provide redundancy, EBS volumes are block storage confined to a single Availability Zone and are attached to one instance at a time, so they cannot provide the consistent, shared, durable storage across instances and Availability Zones that the requirements call for.

B. Creating an Application Load Balancer with Auto Scaling groups across multiple Availability Zones is a good way to increase resiliency. However, mounting instance stores on each EC2 instance does not provide durable storage. Instance stores are ephemeral and do not offer the durability and consistency required for ACID compliance.

D. Storing data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) may not provide the desired durability and resiliency, as S3 One Zone-IA stores data in a single Availability Zone, which may not meet the requirement for high availability across Availability Zones.
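
A sketch of the correct answer (C): an EFS file system with a mount target in each Availability Zone gives all instances the same durable, strongly consistent shared view of the data. boto3 example with placeholder IDs:

    import boto3

    efs = boto3.client("efs")

    fs = efs.create_file_system(
        CreationToken="shared-app-data",  # idempotency token (placeholder)
        PerformanceMode="generalPurpose",
    )

    # One mount target per Availability Zone the instances run in.
    for subnet in ["subnet-aaaa1111", "subnet-bbbb2222"]:  # placeholders
        efs.create_mount_target(
            FileSystemId=fs["FileSystemId"],
            SubnetId=subnet,
            SecurityGroups=["sg-0123456789abcdef0"],
        )

    # Each instance then mounts the same file system, e.g.:
    #   sudo mount -t efs fs-12345678:/ /mnt/app-data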

42
Q

A security team wants to limit access to specific services or actions in all of the team’s AWS accounts. All accounts belong to a large organization in AWS Organizations.
The solution must be scalable and there must be a single point where permissions can be maintained.
What should a solutions architect do to accomplish this?

A. Create an ACL to provide access to the services or actions.

B. Create a security group to allow accounts and attach it to user groups.

C. Create cross-account roles in each account to deny access to the services or actions.

D. Create a service control policy in the root organizational unit to deny access to the services or actions.

A

A. ACLs (Access Control Lists) are typically used for controlling access to specific AWS resources, not for managing access to services or actions across AWS accounts in an organization.

B. Security groups are used to control inbound and outbound traffic for Amazon EC2 instances and are not suitable for managing access to AWS services or actions across accounts.

C. Cross-account roles can be used for granting access across AWS accounts, but in this case, you want to deny access to specific services or actions. Creating cross-account roles to deny access can be complex and may not provide a centralized way to manage permissions.
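
A sketch of the correct answer (D): a deny-based service control policy, created once and attached at the organization root, applies to every account below it. boto3 example; the denied actions are placeholders:

    import json
    import boto3

    org = boto3.client("organizations")

    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": ["ec2:*", "rds:*"],  # placeholder services to restrict
            "Resource": "*",
        }],
    }

    policy = org.create_policy(
        Name="deny-restricted-services",
        Description="Single point of permission maintenance for all accounts",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )

    # Attaching at the root applies the SCP to every OU and account below it.
    root_id = org.list_roots()["Roots"][0]["Id"]
    org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=root_id)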

43
Q

A data science team requires storage for nightly log processing. The size and number of logs is unknown and will persist for 24 hours only.
What is the MOST cost-effective solution?

A. Amazon S3 Glacier

B. Amazon S3 Standard

C. Amazon S3 Intelligent-Tiering

D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

A

A. Amazon S3 Glacier: Amazon S3 Glacier is designed for long-term archival of data that is rarely accessed. It offers significant cost savings, but retrievals can take several hours. Since the logs must be processed nightly with immediate access, Glacier’s retrieval latency rules it out; it is better suited to data archived for extended periods where retrieval latency is not critical.

C. Amazon S3 Intelligent-Tiering: Amazon S3 Intelligent-Tiering is designed to automatically move objects between different storage classes based on access patterns to optimize costs. While it can be a good choice for data with varying access patterns, it may not provide significant cost savings for short-lived logs that are processed nightly and require immediate access. Intelligent-Tiering might introduce additional complexity without substantial cost benefits for this specific use case.

D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA): Amazon S3 One Zone-IA is similar to S3 Standard-IA but stores data in a single Availability Zone, so it lacks the multi-AZ resilience of the other S3 storage classes. It is designed for infrequently accessed data at a lower cost than S3 Standard. For logs that persist for only 24 hours, its lower storage price is undercut by per-GB retrieval charges and a 30-day minimum storage duration, and if data loss from an Availability Zone failure is a concern, the reduced redundancy makes it unsuitable.
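
A sketch of the correct answer (B): keep the logs in S3 Standard, which has no retrieval fees or minimum storage duration, and let a lifecycle rule expire them after a day. boto3 example with placeholder names:

    import boto3

    s3 = boto3.client("s3")

    # Objects under logs/ are deleted automatically one day after creation,
    # so nothing lingers beyond the 24-hour processing window.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-nightly-logs",  # placeholder
        LifecycleConfiguration={
            "Rules": [{
                "ID": "expire-after-24h",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Expiration": {"Days": 1},
            }]
        },
    )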