Q001-050 Flashcards
A solutions architect is designing a solution where users will be directed to a backup static error page if the primary website is unavailable. The primary website’s DNS records are hosted in Amazon Route 53, and the domain points to an Application Load Balancer (ALB). Which configuration should the solutions architect use to meet the company’s needs while minimizing changes and infrastructure overhead?
A. Point a Route 53 alias record to an Amazon CloudFront distribution with the ALB as one of its origins. Then, create custom error pages for the distribution.
B. Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page hosted within an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.
C. Update the Route 53 record to use a latency-based routing policy. Add the backup static error page hosted within an Amazon S3 bucket to the record so the traffic is sent to the most responsive endpoints.
D. Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance hosting a static error page as endpoints. Route 53 will only send requests to the instance if the health checks fail for the ALB.
A. CloudFront Distribution with Custom Error Pages: While this is a viable way to improve availability and performance through a CDN, it adds more complexity than necessary just to redirect users to a static error page when the primary site fails.
C. Latency-Based Routing Policy: This policy is used to route traffic based on the lowest network latency for your end user. It’s not suitable for a failover scenario where the primary concern is the availability of the primary website.
D. Active-Active Configuration with EC2 Instance: An active-active configuration with an EC2 instance as a backup for a static error page is overkill in terms of cost and management. Using an EC2 instance for a static error page is not cost-effective compared to using S3.
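For reference, the remaining option, B (active-passive failover), maps to a small amount of Route 53 configuration. The following is a minimal boto3 sketch, assuming the hosted zone, the ALB, and an S3 static-website bucket named after the domain already exist; every identifier shown is a placeholder.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z123EXAMPLE"          # placeholder hosted zone
DOMAIN = "www.example.com"
ALB_DNS = "my-alb-1234567890.us-east-1.elb.amazonaws.com"   # placeholder ALB DNS name
ALB_ZONE_ID = "ZALBEXAMPLE"             # use the ALB's canonical hosted zone ID for its Region
S3_WEBSITE_ENDPOINT = "s3-website-us-east-1.amazonaws.com"  # S3 website endpoint for the Region
S3_WEBSITE_ZONE_ID = "ZS3EXAMPLE"       # use the S3 website hosted zone ID for the Region

# Health check Route 53 uses to decide whether the primary (ALB) is healthy.
health_check_id = route53.create_health_check(
    CallerReference="primary-alb-check-001",
    HealthCheckConfig={
        "Type": "HTTP",
        "FullyQualifiedDomainName": ALB_DNS,
        "Port": 80,
        "ResourcePath": "/",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)["HealthCheck"]["Id"]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {   # PRIMARY: alias to the ALB, gated by the health check.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": DOMAIN,
                    "Type": "A",
                    "SetIdentifier": "primary-alb",
                    "Failover": "PRIMARY",
                    "HealthCheckId": health_check_id,
                    "AliasTarget": {
                        "HostedZoneId": ALB_ZONE_ID,
                        "DNSName": ALB_DNS,
                        "EvaluateTargetHealth": True,
                    },
                },
            },
            {   # SECONDARY: alias to the S3 static-website bucket hosting the error page.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": DOMAIN,
                    "Type": "A",
                    "SetIdentifier": "secondary-s3-error-page",
                    "Failover": "SECONDARY",
                    "AliasTarget": {
                        "HostedZoneId": S3_WEBSITE_ZONE_ID,
                        "DNSName": S3_WEBSITE_ENDPOINT,
                        "EvaluateTargetHealth": False,
                    },
                },
            },
        ]
    },
)
```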
A solutions architect is designing a high performance computing (HPC) workload on Amazon EC2. The EC2 instances need to communicate with each other frequently and require low-latency, high-throughput network performance.
Which EC2 configuration meets these requirements?
A. Launch the EC2 instances in a cluster placement group in one Availability Zone.
B. Launch the EC2 instances in a spread placement group in one Availability Zone.
C. Launch the EC2 instances in an Auto Scaling group in two Regions and peer the VPCs.
D. Launch the EC2 instances in an Auto Scaling group spanning multiple Availability Zones.
B. Spread Placement Group: A spread placement group places instances on distinct underlying hardware, which suits applications that need high availability, but it does not offer the low-latency, high-throughput networking that a cluster placement group provides.
C. Auto Scaling Group in Two Regions and VPC Peering: Using multiple Regions will significantly increase the latency due to geographical distance, which is not suitable for HPC workloads requiring fast inter-node communication.
D. Auto Scaling Group Spanning Multiple Availability Zones: While this offers high availability, the increased latency between Availability Zones makes it less suitable for HPC workloads that require low latency.
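The remaining option, A (a cluster placement group in a single Availability Zone), can be sketched in a few boto3 calls; the AMI, subnet, and instance type below are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement groups pack instances close together in one AZ for
# low-latency, high-throughput node-to-node traffic.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI
    InstanceType="c5n.18xlarge",          # a network-optimized type commonly used for HPC
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
    # Launching all nodes in the same subnet keeps them in one Availability Zone.
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
)
```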
A company wants to host a scalable web application on AWS. The application will be accessed by users from different geographic regions of the world.
Application users will be able to download and upload unique data up to gigabytes in size. The development team wants a cost-effective solution to minimize upload and download latency and maximize performance.
What should a solutions architect do to accomplish this?
A. Use Amazon S3 with Transfer Acceleration to host the application.
B. Use Amazon S3 with CacheControl headers to host the application.
C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
D. Use Amazon EC2 with Auto Scaling and Amazon ElastiCache to host the application.
A. Amazon S3 with Transfer Acceleration: While S3 with Transfer Acceleration speeds up the transfer of files over long distances between the client and an S3 bucket, it primarily optimizes the transfer to the bucket and doesn’t address the scalability and performance of the web application itself.
B. Amazon S3 with CacheControl Headers: While CacheControl headers can help with caching static content, they don’t provide the same level of global content delivery optimization as CloudFront. Also, this doesn’t address the scalable hosting of the application.
D. Amazon EC2 with Auto Scaling and Amazon ElastiCache: ElastiCache is mainly used for caching frequently accessed data to improve read performance, not for optimizing large file transfers across geographical regions.
A company is migrating from an on-premises infrastructure to the AWS Cloud. One of the company’s applications stores files on a Windows file server farm that uses Distributed File System Replication (DFSR) to keep data in sync. A solutions architect needs to replace the file server farm.
Which service should the solutions architect use?
A. Amazon EFS
B. Amazon FSx
C. Amazon S3
D. AWS Storage Gateway
A. Amazon EFS: Amazon Elastic File System (EFS) is primarily designed for Linux-based applications and doesn’t support Windows file system features like DFSR. It’s not suitable for applications that are tightly integrated with Windows file system services.
C. Amazon S3: While Amazon Simple Storage Service (S3) is a highly scalable object storage service, it’s not a file system and doesn’t provide the file system interface or features (like DFSR) required by the existing Windows-based application.
D. AWS Storage Gateway: Storage Gateway connects on-premises environments with cloud-based storage. It’s more of a data migration and hybrid storage solution rather than a direct replacement for a Windows file server farm. It doesn’t inherently provide a managed Windows file system with DFSR support.
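The remaining option, B (Amazon FSx for Windows File Server), provides a fully managed, Active Directory-joined SMB file system that can replace the DFSR-based farm. A minimal boto3 sketch, assuming an existing AWS Managed Microsoft AD directory and two subnets for a Multi-AZ deployment (all IDs are placeholders):

```python
import boto3

fsx = boto3.client("fsx")

# Multi-AZ FSx for Windows File Server file system joined to an existing
# AWS Managed Microsoft AD directory.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                      # GiB
    StorageType="SSD",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-0123456789",   # placeholder directory ID
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 32,              # MB/s
    },
)
```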
A company has a legacy application that processes data in two parts. The second part of the process takes longer than the first, so the company has decided to rewrite the application as two microservices running on Amazon ECS that can scale independently. How should a solutions architect integrate the microservices?
A. Implement code in microservice 1 to send data to an Amazon S3 bucket. Use S3 event notifications to invoke microservice 2.
B. Implement code in microservice 1 to publish data to an Amazon SNS topic. Implement code in microservice 2 to subscribe to this topic.
C. Implement code in microservice 1 to send data to Amazon Kinesis Data Firehose. Implement code in microservice 2 to read from Kinesis Data Firehose.
D. Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue.
A. Amazon S3 with Event Notifications: While using S3 and event notifications is a valid approach for triggering processes, it’s more suited for scenarios involving file storage and changes. It’s not as efficient for continuous inter-service communication, especially for applications that require more immediate processing of individual messages or data points.
B. Amazon SNS Topic: Amazon Simple Notification Service (SNS) is useful for pub/sub scenarios and fan-out messaging patterns. However, it doesn’t inherently provide the queue-based workload management that SQS offers, which is beneficial in handling varying processing times between microservices.
C. Amazon Kinesis Data Firehose: Kinesis is designed for real-time streaming data and is more complex and costlier for simple inter-service communication. It’s overkill for most microservice architectures unless there’s a specific need for streaming processing.
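The remaining option, D (an SQS queue between the two services), decouples the fast first step from the slower second step. A minimal boto3 sketch, with a placeholder queue URL and a stubbed part-two handler:

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/processing-queue"  # placeholder


def enqueue(result: dict) -> None:
    """Microservice 1: hand off the output of the first processing step."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(result))


def process(payload: dict) -> None:
    # Placeholder for the long-running part-two logic.
    print("processing", payload)


def worker() -> None:
    """Microservice 2: poll the queue and process at its own pace, so the slower
    second step can scale independently of the first."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,   # long polling reduces empty receives
        )
        for msg in resp.get("Messages", []):
            process(json.loads(msg["Body"]))
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```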
A company captures clickstream data from multiple websites and analyzes it using batch processing. The data is loaded nightly into Amazon Redshift and is consumed by business analysts. The company wants to move towards near-real-time data processing for timely insights. The solution should process the streaming data with minimal effort and operational overhead.
Which combination of AWS services is MOST cost-effective for this solution? (Choose two.)
A. Amazon EC2
B. AWS Lambda
C. Amazon Kinesis Data Streams
D. Amazon Kinesis Data Firehose
E. Amazon Kinesis Data Analytics
A. Amazon EC2: While EC2 offers flexibility and control, it requires significant management and operational overhead for scaling, monitoring, and maintaining servers, which is contrary to the requirement of minimal effort.
B. AWS Lambda: Lambda is useful for running code in response to events, but in this scenario, managing the flow and processing of streaming data is more efficiently and cost-effectively handled by Kinesis services.
E. Amazon Kinesis Data Analytics: Although this is a powerful tool for analyzing streaming data using SQL or Apache Flink, the primary need here is the real-time collection and delivery of data. The analysis part is already handled by business analysts using Amazon Redshift.
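The remaining options, C and D (Kinesis Data Streams for ingestion and Kinesis Data Firehose for delivery into Amazon Redshift), require little more than producers putting records. A minimal boto3 sketch of the Firehose producer side, assuming a delivery stream already configured with Redshift as its destination (the stream name is a placeholder):

```python
import json
import time
import boto3

firehose = boto3.client("firehose")
STREAM_NAME = "clickstream-to-redshift"   # assumed pre-configured delivery stream


def put_click(event: dict) -> None:
    # Firehose batches and delivers the records to the configured destination
    # (for this scenario, Amazon Redshift) with no servers to manage.
    firehose.put_record(
        DeliveryStreamName=STREAM_NAME,
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )


put_click({"user": "u-123", "page": "/pricing", "ts": time.time()})
```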
A company’s application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. On the first day of every month at midnight, the application becomes much slower when the month-end financial calculation batch executes. This causes the CPU utilization of the EC2 instances to immediately peak to 100%, which disrupts the application.
What should a solutions architect recommend to ensure the application is able to handle the workload and avoid downtime?
A. Configure an Amazon CloudFront distribution in front of the ALB.
B. Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization.
C. Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.
D. Configure Amazon ElastiCache to remove some of the workload from the EC2 instances.
A. Amazon CloudFront Distribution in Front of the ALB: CloudFront is a Content Delivery Network (CDN) primarily used to cache and deliver static and dynamic content at edge locations. While it can reduce the load on the servers by caching content, it wouldn’t be effective in this scenario, as the issue is related to CPU-intensive batch processing, not content delivery.
B. Auto Scaling Simple Scaling Policy Based on CPU Utilization: A simple scaling policy that triggers based on CPU utilization would reactively add more instances after the CPU utilization spikes. This reactive approach might not scale up the infrastructure quickly enough to handle the sudden increase in load, leading to potential performance issues.
D. Amazon ElastiCache: While ElastiCache can improve application performance by caching frequently accessed data, the problem in this scenario is related to CPU-intensive processing tasks. Unless the performance issue is due to database load that can be alleviated by caching, ElastiCache may not address the core issue.
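The remaining option, C (a scheduled scaling policy), can be expressed as two recurring scheduled actions: a scale-out before the month-end batch and a scale-in afterwards. A minimal boto3 sketch with placeholder group name and capacities:

```python
import boto3

autoscaling = boto3.client("autoscaling")
ASG_NAME = "financial-app-asg"   # placeholder Auto Scaling group name

# Scale out for the month-end batch. Cron fields are minute/hour/day-of-month/month/day-of-week,
# evaluated in UTC; in practice you would start a little before the batch so new
# instances have time to warm up.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=ASG_NAME,
    ScheduledActionName="month-end-scale-out",
    Recurrence="0 0 1 * *",     # midnight UTC on the first of every month
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# Scale back in once the batch window has passed.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=ASG_NAME,
    ScheduledActionName="month-end-scale-in",
    Recurrence="0 6 1 * *",     # 06:00 UTC on the first of every month
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)
```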
A company runs a multi-tier web application that hosts news content. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones and use an Amazon Aurora database. A solutions architect needs to make the application more resilient to periodic increases in request rates.
Which architecture should the solutions architect implement? (Choose two.)
A. Add AWS Shield.
B. Add Aurora Replica.
C. Add AWS Direct Connect.
D. Add AWS Global Accelerator.
E. Add an Amazon CloudFront distribution in front of the Application Load Balancer.
A. AWS Shield: AWS Shield provides protection against DDoS attacks. While it’s important for overall security, it doesn’t specifically address the issue of scaling to handle increased request rates.
C. AWS Direct Connect: Direct Connect provides a dedicated network connection from on-premises to AWS. It’s more about network consistency and reduced bandwidth costs than about scaling an application to handle increased traffic.
D. AWS Global Accelerator: Global Accelerator improves application availability and performance by directing user traffic to optimal endpoints. While it can enhance performance, it’s not as directly impactful as CloudFront for content delivery and Aurora Replicas for database scaling in this scenario.
An application running on AWS uses an Amazon Aurora Multi-AZ deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database.
What should the solutions architect do to separate the read requests from the write requests?
A. Enable read-through caching on the Amazon Aurora database.
B. Update the application to read from the Multi-AZ standby instance.
C. Create a read replica and modify the application to use the appropriate endpoint.
D. Create a second Amazon Aurora database and link it to the primary database as a read replica.
A. Enable read-through caching on the Amazon Aurora database: While caching can improve read performance, it does not fundamentally address the issue of separating read and write requests. High read volume can still impact the overall performance of the primary database.
B. Update the application to read from the Multi-AZ standby instance: In Aurora Multi-AZ deployments, the standby instance is not designed for scaling read operations. It primarily serves as a failover target to ensure high availability. Using it for reads would not be an effective solution and is not a recommended practice.
D. Create a second Amazon Aurora database and link it to the primary database as a read replica: Creating a completely separate Aurora database and linking it as a read replica introduces unnecessary complexity and potential synchronization challenges. It’s more efficient to use Aurora’s built-in read replica feature.
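With the remaining option, C, the application simply uses the two endpoints the Aurora cluster already exposes: the cluster (writer) endpoint for writes and the reader endpoint, which load-balances across Aurora Replicas, for reads. A minimal boto3 sketch, assuming the cluster identifier is a placeholder and at least one Aurora Replica exists:

```python
import boto3

rds = boto3.client("rds")

cluster = rds.describe_db_clusters(DBClusterIdentifier="app-aurora-cluster")["DBClusters"][0]

writer_endpoint = cluster["Endpoint"]         # use for INSERT/UPDATE/DELETE traffic
reader_endpoint = cluster["ReaderEndpoint"]   # load-balances reads across Aurora Replicas

# The application keeps two connection strings: writes go to the cluster
# endpoint, read-heavy queries go to the reader endpoint.
print("writes ->", writer_endpoint)
print("reads  ->", reader_endpoint)
```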
A recently acquired company is required to build its own infrastructure on AWS and migrate multiple applications to the cloud within a month. Each application has approximately 50 TB of data to be transferred. After the migration is complete, this company and its parent company will both require secure network connectivity with consistent throughput from their data centers to the applications. A solutions architect must ensure one-time data migration and ongoing network connectivity.
Which solution will meet these requirements?
A. AWS Direct Connect for both the initial transfer and ongoing connectivity.
B. AWS Site-to-Site VPN for both the initial transfer and ongoing connectivity.
C. AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity.
D. AWS Snowball for the initial transfer and AWS Site-to-Site VPN for ongoing connectivity.
A. AWS Direct Connect for Both Transfers and Connectivity: While Direct Connect provides a high-speed, dedicated network connection, provisioning a new connection typically takes weeks, so using it for the initial transfer of 50 TB per application would be difficult to complete within the one-month window.
B. AWS Site-to-Site VPN for Both Transfers and Connectivity: A Site-to-Site VPN would provide secure connectivity over the internet but may not offer the same level of throughput and performance consistency as Direct Connect. Also, transferring large amounts of data over a VPN might be too slow for the initial migration.
D. AWS Snowball for Initial Transfer and AWS Site-to-Site VPN for Ongoing Connectivity: While Snowball is a good choice for the initial transfer, relying on a Site-to-Site VPN for ongoing connectivity might not meet the need for consistent, high-throughput connectivity, especially for applications that require frequent, large-scale data transfers.
A company serves content to its subscribers across the world using an application running on AWS. The application has several Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). Due to a recent change in copyright restrictions, the chief information officer (CIO) wants to block access for certain countries.
Which action will meet these requirements?
A. Modify the ALB security group to deny incoming traffic from blocked countries.
B. Modify the security group for EC2 instances to deny incoming traffic from blocked countries.
C. Use Amazon CloudFront to serve the application and deny access to blocked countries.
D. Use ALB listener rules to return access denied responses to incoming traffic from blocked countries.
A. Modify the ALB security group to deny incoming traffic from blocked countries: Security groups in AWS are associated with instances and provide stateful filtering of ingress/egress network traffic to instances. However, they don’t inherently have the capability to filter traffic based on geographic location. They work with IP addresses and IP ranges, but maintaining a list of IP ranges for each country is impractical and error-prone, as these can change frequently.
B. Modify the security group for EC2 instances to deny incoming traffic from blocked countries: This option has the same limitations as option A. Security groups do not natively support geolocation-based filtering. Moreover, directly exposing EC2 instances to internet traffic (even when filtered) is generally not a best practice in terms of security.
D. Use ALB listener rules to return access denied responses to incoming traffic from blocked countries: While ALB listener rules allow for routing decisions based on the content of the request (like headers and request paths), they do not support geolocation-based routing decisions. Therefore, this method is not feasible for blocking access based on the user’s country.
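The remaining option, C, relies on CloudFront's built-in geo restriction. A minimal boto3 sketch that adds a country blacklist to an existing distribution (the distribution ID is a placeholder and the country codes are arbitrary examples):

```python
import boto3

cloudfront = boto3.client("cloudfront")
DISTRIBUTION_ID = "E2EXAMPLE12345"   # placeholder distribution ID

# Fetch the current configuration, add a geo restriction, and push it back.
resp = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
config = resp["DistributionConfig"]

config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "blacklist",   # block only the listed countries
        "Quantity": 2,
        "Items": ["AQ", "BV"],            # example ISO 3166-1 codes; replace with the countries to block
    }
}

cloudfront.update_distribution(
    Id=DISTRIBUTION_ID,
    IfMatch=resp["ETag"],   # required optimistic-locking token from the GET call
    DistributionConfig=config,
)
```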
A company is creating a new application that will store a large amount of data. The data will be analyzed hourly and modified by several Amazon EC2 Linux instances that are deployed across multiple Availability Zones. The application team believes the amount of space needed will continue to grow for the next 6 months. Which set of actions should a solutions architect take to support these needs?
A. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the application instances.
B. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the application instances.
C. Store the data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the application instances.
D. Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared between the application instances.
A. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the application instances: EBS volumes are great for single-instance storage with high performance, but they are limited to being attached to one EC2 instance at a time (and even Multi-Attach, available only for Provisioned IOPS volumes, is restricted to instances in the same Availability Zone). This would not work for multiple EC2 instances across multiple Availability Zones as required in the question.
C. Store the data in Amazon S3 Glacier: Amazon S3 Glacier is a low-cost storage service designed for data archiving and long-term backup. It is not suitable for scenarios where data needs to be accessed and modified frequently, as it has retrieval times ranging from minutes to hours. This makes it inappropriate for the needs of an application that requires hourly analysis and modification of data.
D. Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared between the application instances: While EBS Provisioned IOPS volumes offer high performance for I/O-intensive workloads, they cannot be natively shared across multiple EC2 instances in different Availability Zones. The requirement for multiple instances to modify data across Availability Zones makes this option unsuitable.
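The remaining option, B (Amazon EFS), gives all instances in every Availability Zone a shared, elastically growing NFS file system. A minimal boto3 sketch, with placeholder subnet and security group IDs; in practice you would wait for the file system to become available before creating mount targets:

```python
import boto3

efs = boto3.client("efs")

# One EFS file system can be mounted concurrently by instances in every AZ
# and grows automatically as data is added.
fs = efs.create_file_system(
    CreationToken="shared-analytics-data",
    PerformanceMode="generalPurpose",
    ThroughputMode="bursting",
    Encrypted=True,
)
fs_id = fs["FileSystemId"]

# One mount target per Availability Zone (subnet IDs are placeholders).
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],   # must allow NFS (TCP 2049)
    )

# Each Linux instance then mounts the file system, for example with amazon-efs-utils:
#   sudo mount -t efs <fs_id>:/ /mnt/data
```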
A company is migrating a three-tier application to AWS. The application requires a MySQL database. In the past, the application users reported poor application performance when creating new entries. These performance issues were caused by users generating different real-time reports from the application during working hours. Which solution will improve the performance of the application when it is moved to AWS?
A. Import the data into an Amazon DynamoDB table with provisioned capacity. Refactor the application to use DynamoDB for reports.
B. Create the database on a compute optimized Amazon EC2 instance. Ensure compute resources exceed the on-premises database.
C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the application to use the reader endpoint for reports.
D. Create an Amazon Aurora MySQL Multi-AZ DB cluster. Configure the application to use the backup instance of the cluster as an endpoint for the reports.
A. While DynamoDB offers high performance at scale, it is a NoSQL database service, which differs significantly from a relational database like MySQL. Refactoring the application to use DynamoDB could be resource-intensive and may not be necessary if the only issue is performance during reporting. Additionally, DynamoDB’s data model and query capabilities are different from MySQL, which could lead to significant changes in how the application handles data.
B. This approach might improve performance compared to the on-premises setup. However, managing a database on EC2 instances requires handling many aspects like backups, failover, patching, and scalability manually. This option doesn’t provide the best scalability and high availability compared to managed database services.
D. This option is not feasible because the backup instance in a Multi-AZ deployment is not designed for direct querying or load balancing. It is a standby replica used for failover purposes and is not accessible for read queries under normal operations.
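The remaining option, C, adds Aurora Replicas to the cluster so report queries can be sent to the reader endpoint instead of the writer. A minimal boto3 sketch, assuming the Aurora MySQL cluster already exists (identifiers and instance class are placeholders):

```python
import boto3

rds = boto3.client("rds")
CLUSTER_ID = "reporting-aurora-cluster"   # placeholder cluster identifier

# Any instance added to the cluster beyond the writer becomes an Aurora Replica;
# the cluster's reader endpoint load-balances report queries across all replicas.
for i in (1, 2):
    rds.create_db_instance(
        DBInstanceIdentifier=f"{CLUSTER_ID}-replica-{i}",
        DBClusterIdentifier=CLUSTER_ID,
        DBInstanceClass="db.r6g.large",
        Engine="aurora-mysql",
    )
```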
A solutions architect is deploying a distributed database on multiple Amazon EC2 instances. The database stores all data on multiple instances so it can withstand the loss of an instance. The database requires block storage with the low latency and high throughput needed to support several million transactions per second per server.
Which storage solution should the solutions architect use?
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon EC2 instance store
C. Amazon Elastic File System (Amazon EFS)
D. Amazon S3
A. Amazon Elastic Block Store (Amazon EBS): EBS provides a high-performance block storage service suitable for both throughput- and latency-sensitive transactions. However, even though EBS volumes such as Provisioned IOPS SSD (io2) can offer high performance, they might not meet the extreme requirement of several million transactions per second per server, especially considering the network latency involved in accessing EBS volumes, which are network-attached storage.
C. Amazon Elastic File System (Amazon EFS): EFS provides scalable file storage for use with AWS Cloud services and on-premises resources. While it’s good for many use cases, it is not optimized for the extremely high transaction rates described in the scenario. EFS is more suitable for use cases where shared file storage is needed.
D. Amazon S3: Amazon S3 is an object storage service and not suitable for database storage requiring block-level storage and high transaction rates. It is designed for durability, storing large amounts of data, and easy access, not for high transactional workloads.
Organizers for a global event want to put daily reports online as static HTML pages. The pages are expected to generate millions of views from users around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective solution.
Which action should the solutions architect take to accomplish this?
A. Generate presigned URLs for the files.
B. Use cross-Region replication to all Regions.
C. Use the geoproximity feature of Amazon Route 53.
D. Use Amazon CloudFront with the S3 bucket as its origin.
A. Generate presigned URLs for the files: Presigned URLs are typically used to securely share private files from S3 buckets for a limited time. This approach isn’t suitable for public access to static content like HTML pages intended for millions of users, as it would require generating and managing a large number of temporary URLs.
B. Use cross-Region replication to all Regions: Cross-Region replication involves replicating data across different AWS Regions. While this can enhance data availability and durability, it’s not the most efficient way to distribute static content globally. It would require complex management and incur additional costs without providing the latency and performance benefits of a content delivery network (CDN).
C. Use the geoproximity feature of Amazon Route 53: Route 53’s geoproximity routing lets you choose where traffic will be sent based on the geographic location of your users and your resources. However, this is more about routing traffic to different endpoints, rather than efficiently serving static content. It does not provide the caching and global distribution benefits of a CDN.
A solutions architect is designing a new service behind Amazon API Gateway. The request patterns for the service will be unpredictable and can change suddenly from 0 requests to over 500 per second. The total size of the data that needs to be persisted in a backend database is currently less than 1 GB with unpredictable future growth. Data can be queried using simple key-value requests. Which combination of AWS services would meet these requirements? (Choose two.)
A. AWS Fargate
B. AWS Lambda
C. Amazon DynamoDB
D. Amazon EC2 Auto Scaling
E. MySQL-compatible Amazon Aurora
A. AWS Fargate: While Fargate is a serverless compute engine for containers, it’s more suitable for application scenarios where you need more control over the environment and dependencies. It’s not as straightforward and efficient as Lambda for unpredictable, bursty traffic patterns.
D. Amazon EC2 Auto Scaling: EC2 instances with Auto Scaling can handle varying load by adjusting the number of EC2 instances. However, this approach requires more management and isn’t as efficient in scaling rapidly to sudden spikes in traffic compared to AWS Lambda.
E. MySQL-compatible Amazon Aurora: Although Aurora provides high performance and scalability, it is a relational database service, which may be overkill for simple key-value data storage needs. Also, managing a relational database could be more complex and less cost-effective for this use case compared to DynamoDB.
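The remaining options, B and C (AWS Lambda with Amazon DynamoDB), fit the bursty, key-value workload. A minimal sketch of a Lambda handler behind API Gateway, assuming a proxy integration and a DynamoDB table named service-items with partition key itemId created with on-demand (PAY_PER_REQUEST) capacity; all of these names are assumptions:

```python
import json
import boto3

# On-demand capacity lets the table absorb bursts from 0 to 500+ requests per second.
table = boto3.resource("dynamodb").Table("service-items")


def handler(event, context):
    """Lambda handler behind API Gateway (proxy integration assumed)."""
    if event.get("httpMethod") == "PUT":
        item = json.loads(event["body"])
        table.put_item(Item=item)
        return {"statusCode": 200, "body": json.dumps({"stored": item["itemId"]})}

    # Default: simple key-value lookup. default=str handles DynamoDB's Decimal values.
    item_id = event["pathParameters"]["itemId"]
    resp = table.get_item(Key={"itemId": item_id})
    return {"statusCode": 200, "body": json.dumps(resp.get("Item", {}), default=str)}
```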
A start-up company has a web application based in the us-east-1 Region with multiple Amazon EC2 instances running behind an Application Load Balancer across multiple Availability Zones. As the company’s user base grows in the us-west-1 Region, it needs a solution with low latency and high availability.
What should a solutions architect do to accomplish this?
A. Provision EC2 instances in us-west-1. Switch the Application Load Balancer to a Network Load Balancer to achieve cross-Region load balancing.
B. Provision EC2 instances and an Application Load Balancer in us-west-1. Make the load balancer distribute the traffic based on the location of the request.
C. Provision EC2 instances and configure an Application Load Balancer in us-west-1. Create an accelerator in AWS Global Accelerator that uses an endpoint group that includes the load balancer endpoints in both Regions.
D. Provision EC2 instances and configure an Application Load Balancer in us-west-1. Configure Amazon Route 53 with a weighted routing policy. Create alias records in Route 53 that point to the Application Load Balancer.
A. While this approach does provision resources in us-west-1 to reduce latency, AWS Network Load Balancers (NLB) do not support cross-Region load balancing. NLBs operate within a single region, so this approach would not provide the desired outcome of distributing traffic across regions.
B. While provisioning EC2 instances and an ALB in us-west-1 is a good step, ALBs do not inherently distribute traffic based on the geographic location of the request. They route traffic within a single region and do not have built-in capabilities for global traffic distribution based on location.
D. This option involves setting up the infrastructure in us-west-1 and using Route 53 for DNS routing. However, a weighted routing policy doesn’t inherently consider the geographic location of the user for routing decisions. It’s more about distributing traffic between different resources based on assigned weights, and not necessarily for latency optimization.
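The remaining option, C, fronts both Regions' ALBs with AWS Global Accelerator. In practice an accelerator uses one endpoint group per Region; a minimal boto3 sketch with placeholder ALB ARNs:

```python
import boto3

# The Global Accelerator API is served from the us-west-2 endpoint regardless of
# where the application resources live.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="web-app-accelerator", Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region, each pointing at that Region's ALB (ARNs are placeholders).
for region, alb_arn in [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/east-alb/abc123"),
    ("us-west-1", "arn:aws:elasticloadbalancing:us-west-1:123456789012:loadbalancer/app/west-alb/def456"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": alb_arn, "Weight": 128}],
    )
```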
A solutions architect is designing a solution to access a catalog of images and provide users with the ability to submit requests to customize images. Image customization parameters will be included in any request sent to Amazon API Gateway. The customized image will be generated on demand, and users will receive a link they can click to view or download their customized image. The solution must be highly available for viewing and customizing images. What is the MOST cost-effective solution to meet these requirements?
A. Use Amazon EC2 instances to manipulate the original image into the requested customizations. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances.
B. Use AWS Lambda to manipulate the original image to the requested customizations. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.
C. Use AWS Lambda to manipulate the original image to the requested customizations. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances.
D. Use Amazon EC2 instances to manipulate the original image into the requested customizations. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.
A. Amazon EC2 Instances for Image Manipulation: While EC2 instances can be used for image manipulation, they are generally more expensive and require more management compared to AWS Lambda. You need to manage scaling, ensure high availability, and you pay for continuous running of instances, even if there’s no demand.
C. Storing Manipulated Images in Amazon DynamoDB: DynamoDB is a NoSQL database service, not typically used for storing images. Storing large objects like images in DynamoDB is not cost-effective and not aligned with its intended use case.
D. EC2 Instances and DynamoDB Storage: This option combines the less desirable elements of A and C – using EC2 for image manipulation and DynamoDB for storing images, which would not be as cost-effective or as efficient as using Lambda and S3.
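The remaining option, B, can be sketched as a Lambda function behind API Gateway that fetches the original from S3, applies the requested customization, writes the result back to S3, and returns a CloudFront link. The bucket name, distribution domain, and the stubbed transform below are all assumptions:

```python
import json
import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "image-catalog-bucket"                      # placeholder bucket (also the CloudFront origin)
CLOUDFRONT_DOMAIN = "d111111abcdef8.cloudfront.net"  # placeholder distribution domain


def customize(original: bytes, params: dict) -> bytes:
    # Placeholder: real resizing/watermarking would use an imaging library
    # packaged with the function (for example, as a Lambda layer).
    return original


def handler(event, context):
    body = json.loads(event["body"])                 # API Gateway proxy integration assumed
    original = s3.get_object(Bucket=BUCKET, Key=body["imageKey"])["Body"].read()

    result_key = f"customized/{uuid.uuid4()}.jpg"
    s3.put_object(
        Bucket=BUCKET,
        Key=result_key,
        Body=customize(original, body.get("parameters", {})),
        ContentType="image/jpeg",
    )

    # The user receives a CloudFront URL that serves the object from the S3 origin.
    return {
        "statusCode": 200,
        "body": json.dumps({"url": f"https://{CLOUDFRONT_DOMAIN}/{result_key}"}),
    }
```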
A company is planning to migrate a business-critical dataset to Amazon S3. The current solution design uses a single S3 bucket in the us-east-1 Region with versioning enabled to store the dataset. The company’s disaster recovery policy states that all data must exist in multiple AWS Regions.
How should a solutions architect design the S3 solution?
A. Create an additional S3 bucket in another Region and configure cross-Region replication.
B. Create an additional S3 bucket in another Region and configure cross-origin resource sharing (CORS).
C. Create an additional S3 bucket with versioning in another Region and configure cross-Region replication.
D. Create an additional S3 bucket with versioning in another Region and configure cross-origin resource sharing (CORS).
A. Create an additional S3 bucket in another Region and configure cross-Region replication: While this option does create a bucket in another region and sets up cross-region replication, it doesn’t mention enabling versioning. Versioning is an important feature for maintaining the integrity of the data, especially in a business-critical dataset. It keeps multiple versions of an object in one bucket, which is useful for recovery in case of accidental deletion or overwriting.
B. Create an additional S3 bucket in another Region and configure cross-origin resource sharing (CORS): CORS is a mechanism that allows many resources (e.g., fonts, JavaScript, etc.) on a web page to be requested from another domain outside the domain from which the resource originated. This option is not relevant to the requirement of replicating data for disaster recovery purposes.
D. Create an additional S3 bucket with versioning in another Region and configure cross-origin resource sharing (CORS): Similar to option B, CORS is not relevant for data replication and disaster recovery. While this option includes versioning, which is good, it does not mention cross-Region replication, which is key to fulfilling the disaster recovery requirement.
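The remaining option, C, combines versioning on both buckets with cross-Region replication. A minimal boto3 sketch, assuming both buckets and the replication IAM role already exist (names and ARNs are placeholders; for buckets in different Regions you may need Region-specific clients):

```python
import boto3

s3 = boto3.client("s3")
SOURCE_BUCKET = "critical-dataset-use1"      # existing bucket in us-east-1
DEST_BUCKET = "critical-dataset-usw2"        # new bucket in another Region
REPLICATION_ROLE = "arn:aws:iam::123456789012:role/s3-crr-role"   # placeholder IAM role

# Versioning must be enabled on BOTH buckets before replication can be configured.
for bucket in (SOURCE_BUCKET, DEST_BUCKET):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": REPLICATION_ROLE,
        "Rules": [
            {
                "ID": "replicate-everything",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},                                 # empty filter = all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": f"arn:aws:s3:::{DEST_BUCKET}"},
            }
        ],
    },
)
```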
A company has applications running on Amazon EC2 instances in a VPC. One of the applications needs to call the Amazon S3 API to store and read objects. The company’s security policies restrict any internet-bound traffic from the applications.
Which action will fulfill these requirements and maintain security?
A. Configure an S3 interface endpoint.
B. Configure an S3 gateway endpoint.
C. Create an S3 bucket in a private subnet.
D. Create an S3 bucket in the same Region as the EC2 instance.
A. Configure an S3 interface endpoint: An interface endpoint (powered by AWS PrivateLink) enables you to connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. However, for S3, a gateway endpoint is a more efficient and cost-effective solution compared to an interface endpoint.
C. Create an S3 bucket in a private subnet: Amazon S3 buckets are not created within a VPC or its subnets. S3 is a global service, and its buckets are not confined to VPCs or subnets. Therefore, this option is not applicable or possible.
D. Create an S3 bucket in the same Region as the EC2 instance: While it’s generally a good practice to create an S3 bucket in the same region as the EC2 instances for latency and cost considerations, merely creating a bucket in the same region does not address the security requirement of restricting internet-bound traffic. A VPC endpoint is needed for private connectivity.
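The remaining option, B, needs a single API call. A minimal boto3 sketch, with placeholder VPC and route-table IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A gateway endpoint adds a route for S3 into the chosen route tables, so the
# instances reach S3 over the AWS network with no internet gateway or NAT device.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],    # route tables of the private subnets
)
```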