More Test Questions - 2 Flashcards
A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images. A Solutions Architect has been asked to recommend a solution for hosting the website. Which solution is the MOST cost-effective?
1: Containerize the website and host it in AWS Fargate
2: Create an Amazon S3 bucket and host the website there
3: Deploy a web server on an Amazon EC2 instance to host the website
4: Configure an Application Load Balancer with an AWS Lambda target
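To make the static-hosting option concrete, here is a minimal sketch of the website configuration document that S3's PutBucketWebsite API accepts; the page names are illustrative assumptions, not values from the question.

```python
# Sketch: the website configuration payload used when an S3 bucket
# serves a static site (HTML, CSS, client-side JS, images).
def build_website_config(index_page="index.html", error_page="error.html"):
    return {
        "IndexDocument": {"Suffix": index_page},  # served for directory requests
        "ErrorDocument": {"Key": error_page},     # served on 4xx errors
    }

config = build_website_config()
print(config["IndexDocument"]["Suffix"])  # index.html
```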
A company requires a solution to allow customers to customize images that are stored in an online catalog. The image customization parameters will be sent in requests to Amazon API Gateway. The customized image will then be generated on-demand and can be accessed online. The solutions architect requires a highly available solution. Which solution will be MOST cost-effective?
1: Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances
2: Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
3: Use AWS Lambda to manipulate the original images to the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances
4: Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
A solutions architect is finalizing the architecture for a distributed database that will run across multiple Amazon EC2 instances. Data will be replicated across all instances so the loss of an instance will not cause loss of data. The database requires block storage with low latency and throughput that supports up to several million transactions per second per server. Which storage solution should the solutions architect use?
1: Amazon EBS
2: Amazon EC2 instance store
3: Amazon EFS
4: Amazon S3
A website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The website’s DNS records are hosted in Amazon Route 53 with the domain name pointing to the ALB. A solution is required for displaying a static error page if the website becomes unavailable. Which configuration should a solutions architect use to meet these requirements with the LEAST operational overhead?
1: Create a Route 53 alias record for an Amazon CloudFront distribution and specify the ALB as the origin. Create custom error pages for the distribution
2: Create a Route 53 active-passive failover configuration. Create a static website using an Amazon S3 bucket that hosts a static error page. Configure the static website as the passive record for failover
3: Create a Route 53 weighted routing policy. Create a static website using an Amazon S3 bucket that hosts a static error page. Configure the record for the S3 static website with a weighting of zero. When an issue occurs increase the weighting
4: Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance hosting a static error page as endpoints. Route 53 will only send requests to the instance if the health checks fail for the ALB
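To illustrate the failover mechanics this question tests, here is a sketch of primary/secondary record sets in the shape Route 53's ChangeResourceRecordSets API uses; the domain, alias targets, and health-check ID are illustrative assumptions.

```python
# Sketch: Route 53 active-passive failover records. The primary record
# carries a health check; when it fails, Route 53 answers with the
# secondary (e.g. an S3 static error page).
def failover_record(name, role, alias_target, health_check_id=None):
    record = {
        "Name": name,
        "Type": "A",
        "SetIdentifier": f"{role.lower()}-record",
        "Failover": role,              # "PRIMARY" or "SECONDARY"
        "AliasTarget": alias_target,
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return record

primary = failover_record(
    "www.example.com", "PRIMARY",
    {"DNSName": "my-alb.elb.amazonaws.com", "HostedZoneId": "ZALBZONE",
     "EvaluateTargetHealth": True},
    health_check_id="hc-example")
secondary = failover_record(
    "www.example.com", "SECONDARY",
    {"DNSName": "s3-website.amazonaws.com", "HostedZoneId": "ZS3ZONE",
     "EvaluateTargetHealth": False})
```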
A company is deploying a new web application that will run on Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. The application requires a shared storage solution that offers strong consistency as the content will be regularly updated. Which solution requires the LEAST amount of effort?
1: Create an Amazon S3 bucket to store the web content and use Amazon CloudFront to deliver the content
2: Create an Amazon Elastic File System (Amazon EFS) file system and mount it on the individual Amazon EC2 instances
3: Create a shared Amazon Elastic Block Store (Amazon EBS) volume and mount it on the individual Amazon EC2 instances
4: Create a volume gateway using AWS Storage Gateway to host the data and mount it to the Auto Scaling group
A website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The website has a mix of dynamic and static content. Customers around the world are reporting performance issues with the website. Which set of actions will improve website performance for users worldwide?
1: Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the Amazon Route 53 record to point to the CloudFront distribution
2: Create a latency-based Amazon Route 53 record for the ALB. Then launch new EC2 instances with larger instance sizes and register the instances with the ALB
3: Launch new EC2 instances hosting the same web application in different Regions closer to the users. Use an AWS Transit Gateway to connect customers to the closest region
4: Migrate the website to an Amazon S3 bucket in the Regions closest to the users. Then create an Amazon Route 53 geolocation record to point to the S3 buckets
A web application has recently been launched on AWS. The architecture includes two tiers: a web layer and a database layer. It has been identified that the web server layer may be vulnerable to cross-site scripting (XSS) attacks. What should a solutions architect do to remediate the vulnerability?
1: Create a Classic Load Balancer. Put the web layer behind the load balancer and enable AWS WAF
2: Create a Network Load Balancer. Put the web layer behind the load balancer and enable AWS WAF
3: Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF
4: Create an Application Load Balancer. Put the web layer behind the load balancer and use AWS Shield Standard
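As background for the XSS option, here is a sketch of an AWS WAF (v2) rule statement that inspects the request body for cross-site scripting patterns; the rule name, priority, and metric name are illustrative assumptions.

```python
# Sketch: a WAFv2 rule with an XssMatchStatement, as it appears inside
# a web ACL's Rules list. The rule blocks requests whose body matches
# known XSS patterns after HTML-entity decoding.
xss_rule = {
    "Name": "block-xss",
    "Priority": 0,
    "Statement": {
        "XssMatchStatement": {
            "FieldToMatch": {"Body": {}},
            "TextTransformations": [
                {"Priority": 0, "Type": "HTML_ENTITY_DECODE"}
            ],
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "block-xss",
    },
}
```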
A static website currently runs in a company’s on-premises data center. The company plans to migrate the website to AWS. The website must load quickly for global users and the solution must also be cost-effective. What should a solutions architect do to accomplish this?
1: Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Replicate the S3 bucket to multiple AWS Regions
2: Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin
3: Copy the website content to an Amazon EC2 instance. Configure Amazon Route 53 geolocation routing policies to select the closest origin
4: Copy the website content to multiple Amazon EC2 instances in multiple AWS Regions. Configure AWS Route 53 geolocation routing policies to select the closest region
A multi-tier application runs with eight front-end web servers in an Amazon EC2 Auto Scaling group in a single Availability Zone behind an Application Load Balancer. A solutions architect needs to modify the infrastructure to be highly available without modifying the application. Which architecture should the solutions architect choose that provides high availability?
1: Create an Auto Scaling group that uses four instances across each of two Regions
2: Modify the Auto Scaling group to use four instances across each of two Availability Zones
3: Create an Auto Scaling template that can be used to quickly create more instances in another Region
4: Create an Auto Scaling group that uses four instances across each of two subnets
A company’s web application is using multiple Amazon EC2 Linux instances and storing data on Amazon EBS volumes. The company is looking for a solution to increase the resiliency of the application in case of a failure. What should a solutions architect do to meet these requirements?
1: Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2 instance
2: Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Mount an instance store on each EC2 instance
3: Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance
4: Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
A website runs on a Microsoft Windows server in an on-premises data center. The web server is being migrated to Amazon EC2 Windows instances in multiple Availability Zones on AWS. The web server currently uses data stored in an on-premises network-attached storage (NAS) device. Which replacement to the NAS file share is MOST resilient and durable?
1: Migrate the file share to Amazon EBS
2: Migrate the file share to AWS Storage Gateway
3: Migrate the file share to Amazon FSx for Windows File Server
4: Migrate the file share to Amazon Elastic File System (Amazon EFS)
A company is planning a migration for a high performance computing (HPC) application and associated data from an on-premises data center to the AWS Cloud. The company uses tiered storage on premises with hot high-performance parallel storage to support the application during periodic runs of the application, and more economical cold storage to hold the data when the application is not actively running. Which combination of solutions should a solutions architect recommend to support the storage needs of the application? (Select TWO)
1: Amazon S3 for cold data storage
2: Amazon EFS for cold data storage
3: Amazon S3 for high-performance parallel storage
4: Amazon FSx for Lustre for high-performance parallel storage
5: Amazon FSx for Windows for high-performance parallel storage
A web application that allows users to upload and share documents is running on a single Amazon EC2 instance with an Amazon EBS volume. To increase availability the architecture has been updated to use an Auto Scaling group of several instances across Availability Zones behind an Application Load Balancer. After the change users can only see a subset of the documents. What is the BEST method for a solutions architect to modify the solution so users can see all documents?
1: Run a script to synchronize the data between Amazon EBS volumes
2: Use Sticky Sessions with the ALB to ensure users are directed to the same EC2 instance in a session
3: Copy the data from all EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS
4: Configure the Application Load Balancer to send the request to all servers. Return each document from the correct server
A company runs an internal browser-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during work hours, but scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs well by midmorning. How should the scaling be changed to address the staff complaints and keep costs to a minimum?
1: Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens
2: Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period
3: Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period
4: Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens
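To ground the scheduled-action options, here is a sketch of the parameters Auto Scaling's PutScheduledUpdateGroupAction API takes; the group name and cron expression are illustrative assumptions.

```python
# Sketch: a scheduled action that raises desired capacity shortly
# before the workday starts, so instances are warm when staff arrive.
scheduled_action = {
    "AutoScalingGroupName": "web-asg",
    "ScheduledActionName": "scale-out-before-office-opens",
    "Recurrence": "45 7 * * MON-FRI",  # 07:45 on weekdays (UTC)
    "DesiredCapacity": 20,             # min/max are left unchanged
}
```

Because only desired capacity is set, the group can still scale back down on its own once real demand is known.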
An application uses Amazon EC2 instances and an Amazon RDS MySQL database. The database is not currently encrypted. A solutions architect needs to apply encryption to the database for all new and existing data. How should this be accomplished?
1: Create an Amazon ElastiCache cluster and encrypt data using the cache nodes
2: Enable encryption for the database using the API. Take a full snapshot of the database. Delete old snapshots
3: Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot
4: Create an RDS read replica with encryption at rest enabled. Promote the read replica to master and switch the application over to the new master. Delete the old RDS instance
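The snapshot-copy approach can be sketched as three RDS API calls, expressed here as parameter dicts; the instance identifiers and KMS key alias are illustrative assumptions.

```python
# Sketch: encrypting an existing unencrypted RDS instance by snapshot,
# encrypted copy, and restore. Encryption cannot be switched on in
# place, which is why the copy step carries the KMS key.
steps = [
    ("create_db_snapshot", {
        "DBInstanceIdentifier": "mysql-prod",
        "DBSnapshotIdentifier": "mysql-prod-snap",
    }),
    ("copy_db_snapshot", {
        "SourceDBSnapshotIdentifier": "mysql-prod-snap",
        "TargetDBSnapshotIdentifier": "mysql-prod-snap-encrypted",
        "KmsKeyId": "alias/aws/rds",  # the copy is encrypted with this key
    }),
    ("restore_db_instance_from_db_snapshot", {
        "DBInstanceIdentifier": "mysql-prod-encrypted",
        "DBSnapshotIdentifier": "mysql-prod-snap-encrypted",
    }),
]
```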
A company has 500 TB of data in an on-premises file share that needs to be moved to Amazon S3 Glacier. The migration must not saturate the company’s low-bandwidth internet connection and the migration must be completed within a few weeks. What is the MOST cost-effective solution?
1: Create an AWS Direct Connect connection and migrate the data straight into Amazon Glacier
2: Order 7 AWS Snowball appliances and select an S3 Glacier vault as the destination. Create a bucket policy to enforce a VPC endpoint
3: Use AWS Global Accelerator to accelerate upload and optimize usage of the available bandwidth
4: Order 7 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier
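The appliance count in these options follows from simple arithmetic; the 72 TB usable figure per 80 TB Snowball is an assumption based on commonly cited capacity numbers.

```python
import math

# Sketch: why 7 Snowball appliances cover 500 TB of data.
total_tb = 500
usable_tb_per_device = 72  # assumed usable capacity of an 80 TB Snowball

devices = math.ceil(total_tb / usable_tb_per_device)
print(devices)  # 7
```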
A company has refactored a legacy application to run as two microservices using Amazon ECS. The application processes data in two parts and the second part of the process takes longer than the first. How can a solutions architect integrate the microservices and allow them to scale independently?
1: Implement code in microservice 1 to send data to an Amazon S3 bucket. Use S3 event notifications to invoke microservice 2
2: Implement code in microservice 1 to publish data to an Amazon SNS topic. Implement code in microservice 2 to subscribe to this topic
3: Implement code in microservice 1 to send data to Amazon Kinesis Data Firehose. Implement code in microservice 2 to read from Kinesis Data Firehose
4: Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue
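The queue-based decoupling pattern can be shown in miniature with Python's standard `queue` module standing in for Amazon SQS; the service names are just labels from the question.

```python
import queue

# Sketch: microservice 1 enqueues work quickly; microservice 2 workers
# drain the queue at their own pace. Because the queue buffers the
# backlog, each side can scale independently.
work_queue = queue.Queue()

def microservice_1(items):
    for item in items:            # fast first-stage producer
        work_queue.put(item)

def microservice_2_worker():
    results = []
    while not work_queue.empty():  # slower consumer; add workers to scale out
        results.append(work_queue.get().upper())
    return results

microservice_1(["part-a", "part-b"])
print(microservice_2_worker())  # ['PART-A', 'PART-B']
```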
A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company. How should security groups be configured in this situation? (Select TWO)
1: Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0
2: Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0
3: Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier
4: Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier
5: Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier
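For reference, here are the two ingress rules these options describe, in the shape EC2's AuthorizeSecurityGroupIngress API uses; the security group ID is an illustrative assumption.

```python
# Sketch: web tier accepts HTTPS from anywhere; database tier accepts
# SQL Server traffic only from the web tier's security group.
web_tier_ingress = {
    "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0"}],            # HTTPS from the internet
}
db_tier_ingress = {
    "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,  # MS SQL Server port
    "UserIdGroupPairs": [{"GroupId": "sg-webtier"}],  # source = web tier SG
}
```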
A solutions architect has created a new AWS account and must secure AWS account root user access. Which combination of actions will accomplish this? (Select TWO)
1: Ensure the root user uses a strong password
2: Enable multi-factor authentication to the root user
3: Store root user access keys in an encrypted Amazon S3 bucket
4: Add the root user to a group containing administrative permissions
5: Delete the root user account
A company allows its developers to attach existing IAM policies to existing IAM roles to enable faster experimentation and agility. However, the security operations team is concerned that the developers could attach the existing administrator policy, which would allow the developers to circumvent any other security policies. How should a solutions architect address this issue?
1: Create an Amazon SNS topic to send an alert every time a developer creates a new policy
2: Use service control policies to disable IAM activity across all accounts in the organizational unit
3: Prevent the developers from attaching any policies and assign all IAM duties to the security operations team
4: Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy
A solutions architect is optimizing a website for real-time streaming and on-demand videos. The website’s users are located around the world and the solutions architect needs to optimize the performance for both the real-time and on-demand streaming. Which service should the solutions architect choose?
1: Amazon CloudFront
2: AWS Global Accelerator
3: Amazon Route 53
4: Amazon S3 Transfer Acceleration
An organization is creating a new storage solution and needs to ensure that Amazon S3 objects that are deleted are immediately restorable for up to 30 days. After 30 days the objects should be retained for a further 180 days and be restorable within 24 hours. The solution should be operationally simple and cost-effective. How can these requirements be achieved? (Select TWO)
1: Enable object versioning on the Amazon S3 bucket that will contain the objects
2: Create a lifecycle rule to transition non-current versions to GLACIER after 30 days, and then expire the objects after 180 days
3: Enable multi-factor authentication (MFA) delete protection
4: Enable cross-region replication (CRR) for the Amazon S3 bucket that will contain the objects
5: Create a lifecycle rule to transition non-current versions to STANDARD_IA after 30 days, and then expire the objects after 180 days
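The versioning-plus-lifecycle combination these options describe can be sketched as S3 API payloads; the rule ID is an illustrative assumption, and the day counts follow the scenario (transition noncurrent versions at day 30, expire them 180 days later, i.e. day 210).

```python
# Sketch: with versioning enabled, a delete leaves a noncurrent version
# behind; a lifecycle rule then tiers and eventually expires it.
versioning = {"Status": "Enabled"}

lifecycle_rule = {
    "ID": "retain-deleted-objects",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},  # apply to the whole bucket
    "NoncurrentVersionTransitions": [
        {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
    ],
    "NoncurrentVersionExpiration": {"NoncurrentDays": 210},
}
```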
Objects uploaded to Amazon S3 are initially accessed frequently for a period of 30 days. Then, objects are infrequently accessed for up to 90 days. After that, the objects are no longer needed. How should lifecycle management be configured?
1: Transition to STANDARD_IA after 30 days. After 90 days transition to GLACIER
2: Transition to STANDARD_IA after 30 days. After 90 days transition to ONEZONE_IA
3: Transition to ONEZONE_IA after 30 days. After 90 days expire the objects
4: Transition to REDUCED_REDUNDANCY after 30 days. After 90 days expire the objects
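For comparison with the previous question, a lifecycle rule on current objects looks like this; the storage class shown is one of the options above, not a stated answer, and reading "after 90 days" as day 120 from creation (30 frequent + 90 infrequent) is an assumption.

```python
# Sketch: current-object lifecycle matching the access pattern in the
# question: transition at day 30, final action at day 120.
lifecycle_rule = {
    "ID": "tiering-rule",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
    "Expiration": {"Days": 120},  # objects are no longer needed after this
}
```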
A company has acquired another business and needs to migrate their 50TB of data into AWS within 1 month. They also require a secure, reliable and private connection to the AWS cloud. How are these requirements best accomplished?
1: Provision an AWS Direct Connect connection and migrate the data over the link
2: Migrate data using AWS Snowball. Provision an AWS VPN initially and order a Direct Connect link
3: Launch a Virtual Private Gateway (VPG) and migrate the data over the AWS VPN
4: Provision an AWS VPN CloudHub connection and migrate the data over redundant links
An application on Amazon Elastic Container Service (ECS) performs data processing in two parts. The second part takes much longer to complete. How can an Architect decouple the data processing from the backend application component?
1: Process both parts using the same ECS task. Create an Amazon Kinesis Firehose stream
2: Process each part using a separate ECS task. Create an Amazon SNS topic and send a notification when the processing completes
3: Create an Amazon DynamoDB table and save the output of the first part to the table
4: Process each part using a separate ECS task. Create an Amazon SQS queue
An application is running on Amazon EC2 behind an Elastic Load Balancer (ELB). Content is being published using Amazon CloudFront, and you need to prevent users from bypassing CloudFront and accessing the content directly through the ELB. How can you configure this solution?
1: Create an Origin Access Identity (OAI) and associate it with the distribution
2: Use signed URLs or signed cookies to limit access to the content
3: Use a Network ACL to restrict access to the ELB
4: Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the CloudFront internal service IP addresses when they change