SAA L2P 601-650 v24.021 Flashcards
QUESTION 650
A company has a financial application that produces reports. The reports average 50 KB in size
and are stored in Amazon S3. The reports are frequently accessed during the first week after
production and must be stored for several years. The reports must be retrievable within 6 hours.
Which solution meets these requirements MOST cost-effectively?
A. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7 days.
B. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Standard-Infrequent
Access (S3 Standard-IA) after 7 days.
C. Use S3 Intelligent-Tiering. Configure S3 Intelligent-Tiering to transition the reports to S3
Standard-Infrequent Access (S3 Standard-IA) and S3 Glacier.
D. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier Deep Archive
after 7 days.
A. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7 days.
Explanation:
Amazon S3 Glacier (Flexible Retrieval):
Expedited Retrieval: Provides access to data within 1-5 minutes.
Standard Retrieval: Provides access to data within 3-5 hours.
Bulk Retrieval: Provides access to data within 5-12 hours.
Amazon S3 Glacier Deep Archive:
Standard Retrieval: Provides access to data within 12 hours.
Bulk Retrieval: Provides access to data within 48 hours.
Glacier standard retrieval (3-5 hours) meets the 6-hour requirement at a lower storage cost than S3 Standard-IA (option B), while S3 Glacier Deep Archive (option D) does not, because its fastest standard retrieval takes up to 12 hours.
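A lifecycle rule like the one in option A takes only a few lines of boto3. A minimal sketch, assuming a hypothetical bucket name and a reports/ prefix:

```python
import boto3

s3 = boto3.client("s3")

# Transition reports to S3 Glacier Flexible Retrieval 7 days after creation.
# "example-reports-bucket" and the "reports/" prefix are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-reports-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "reports-to-glacier",
                "Filter": {"Prefix": "reports/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```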
QUESTION 649
A company is using AWS Key Management Service (AWS KMS) keys to encrypt AWS Lambda
environment variables. A solutions architect needs to ensure that the required permissions are in
place to decrypt and use the environment variables.
Which steps must the solutions architect take to implement the correct permissions? (Choose
two.)
A. Add AWS KMS permissions in the Lambda resource policy.
B. Add AWS KMS permissions in the Lambda execution role.
C. Add AWS KMS permissions in the Lambda function policy.
D. Allow the Lambda execution role in the AWS KMS key policy.
E. Allow the Lambda resource policy in the AWS KMS key policy.
B. Add AWS KMS permissions in the Lambda execution role.
D. Allow the Lambda execution role in the AWS KMS key policy.
Explanation:
To decrypt environment variables encrypted with AWS KMS, Lambda needs to be granted
permissions to call KMS APIs. This is done in two places:
The Lambda execution role needs kms:Decrypt and kms:GenerateDataKey permissions added.
The execution role governs what AWS services the function code can access.
The KMS key policy needs to allow the Lambda execution role to have kms:Decrypt and
kms:GenerateDataKey permissions for that specific key. This allows the execution role to use that particular key.
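The two grants can be sketched as policy documents. A minimal illustration, with hypothetical role and key ARNs as placeholders:

```python
import json

# Identity-based policy attached to the Lambda execution role (option B).
# The key ARN is a placeholder.
execution_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
        "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
    }],
}

# Statement added to the KMS key policy to allow the execution role (option D).
key_policy_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/example-lambda-role"},
    "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
    "Resource": "*",
}

print(json.dumps(execution_role_policy, indent=2))
```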
QUESTION 648
A company has created a multi-tier application for its ecommerce website. The website uses an
Application Load Balancer that resides in the public subnets, a web tier in the public subnets, and
a MySQL cluster hosted on Amazon EC2 instances in the private subnets. The MySQL database
needs to retrieve product catalog and pricing information that is hosted on the internet by a third-
party provider. A solutions architect must devise a strategy that maximizes security without
increasing operational overhead.
What should the solutions architect do to meet these requirements?
A. Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance.
B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all
internet-bound traffic to the NAT gateway.
C. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to
direct internet-bound traffic to the internet gateway.
D. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table
to direct internet-bound traffic to the virtual private gateway.
B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all
internet-bound traffic to the NAT gateway.
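A NAT gateway gives the private-subnet database outbound access to the third-party feed while remaining unreachable from the internet, with no instance for the team to patch or scale. A minimal boto3 sketch, assuming placeholder subnet, Elastic IP allocation, and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Create the NAT gateway in a public subnet (IDs are placeholders).
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",
    AllocationId="eipalloc-0123456789abcdef0",
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Route all internet-bound traffic from the private subnet through it.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```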
QUESTION 647
A company has separate AWS accounts for its finance, data analytics, and development
departments. Because of costs and security concerns, the company wants to control which
services each AWS account can use.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Systems Manager templates to control which AWS services each department can use.
B. Create organizational units (OUs) for each department in AWS Organizations. Attach service
control policies (SCPs) to the OUs.
C. Use AWS CloudFormation to automatically provision only the AWS services that each department
can use.
D. Set up a list of products in AWS Service Catalog in the AWS accounts to manage and control the
usage of specific AWS services.
B. Create organizational units (OUs) for each department in AWS Organizations. Attach service
control policies (SCPs) to the OUs.
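An SCP is a JSON document attached to an OU that caps what member accounts can do. A minimal boto3 sketch; the allowed-service list and OU ID are illustrative placeholders:

```python
import json

import boto3

org = boto3.client("organizations")

# Example SCP that denies everything except the services a department may use.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "NotAction": ["s3:*", "athena:*", "glue:*"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="data-analytics-allowed-services",
    Description="Limit the data analytics OU to approved services",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examp-12345678",
)
```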
QUESTION 646
A company has data collection sensors at different locations. The data collection sensors stream
a high volume of data to the company. The company wants to design a platform on AWS to
ingest and process high-volume streaming data. The solution must be scalable and support data
collection in near real time. The company must store the data in Amazon S3 for future reporting.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon Kinesis Data Firehose to deliver streaming data to Amazon S3.
B. Use AWS Glue to deliver streaming data to Amazon S3.
C. Use AWS Lambda to deliver streaming data and store the data to Amazon S3.
D. Use AWS Database Migration Service (AWS DMS) to deliver streaming data to Amazon S3.
A. Use Amazon Kinesis Data Firehose to deliver streaming data to Amazon S3.
Explanation:
Amazon Kinesis Data Firehose: Capture, transform, and load data streams into AWS data stores
(S3) in near real-time.
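Firehose is fully managed: it batches, optionally transforms, and delivers the stream to S3 with no servers to operate. A minimal delivery-stream sketch, assuming placeholder role and bucket ARNs:

```python
import boto3

firehose = boto3.client("firehose")

# Sensors write records directly to the stream; Firehose batches them into S3.
# Role ARN and bucket ARN are placeholders.
firehose.create_delivery_stream(
    DeliveryStreamName="sensor-ingest",
    DeliveryStreamType="DirectPut",
    S3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
        "BucketARN": "arn:aws:s3:::example-sensor-data",
        "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 5},
    },
)

# Each sensor record is a simple put:
firehose.put_record(
    DeliveryStreamName="sensor-ingest",
    Record={"Data": b'{"sensor_id": "s-1", "reading": 21.7}\n'},
)
```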
QUESTION 645
A recent analysis of a company’s IT expenses highlights the need to reduce backup costs. The
company’s chief information officer wants to simplify the on-premises backup infrastructure and
reduce costs by eliminating the use of physical backup tapes. The company must preserve the
existing investment in the on-premises backup applications and workflows.
What should a solutions architect recommend?
A. Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.
B. Set up an Amazon EFS file system that connects with the backup applications using the NFS
interface.
C. Set up an Amazon EFS file system that connects with the backup applications using the iSCSI
interface.
D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual
tape library (VTL) interface.
D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual
tape library (VTL) interface.
Explanation:
Tape Gateway (AWS Storage Gateway) presents an iSCSI-based virtual tape library (VTL) to the existing backup applications, so the company can eliminate physical tapes without changing its current backup software or workflows.
https://aws.amazon.com/storagegateway/vtl/?nc1=h_ls
QUESTION 644
A retail company uses a regional Amazon API Gateway API for its public REST APIs. The API
Gateway endpoint is a custom domain name that points to an Amazon Route 53 alias record. A
solutions architect needs to create a solution that has minimal effects on customers and minimal
data loss to release the new version of APIs.
Which solution will meet these requirements?
A. Create a canary release deployment stage for API Gateway. Deploy the latest API version. Point
an appropriate percentage of traffic to the canary stage. After API verification, promote the canary
stage to the production stage.
B. Create a new API Gateway endpoint with a new version of the API in OpenAPI YAML file format.
Use the import-to-update operation in merge mode into the API in API Gateway. Deploy the new
version of the API to the production stage.
C. Create a new API Gateway endpoint with a new version of the API in OpenAPI JSON file format.
Use the import-to-update operation in overwrite mode into the API in API Gateway. Deploy the
new version of the API to the production stage.
D. Create a new API Gateway endpoint with new versions of the API definitions. Create a custom
domain name for the new API Gateway API. Point the Route 53 alias record to the new API
Gateway API custom domain name.
A. Create a canary release deployment stage for API Gateway. Deploy the latest API version. Point
an appropriate percentage of traffic to the canary stage. After API verification, promote the canary
stage to the production stage.
In a canary release deployment, total API traffic is separated at random into a production release
and a canary release with a pre-configured ratio. Typically, the canary release receives a small
percentage of API traffic and the production release takes up the rest. The updated API features
are only visible to API traffic through the canary. You can adjust the canary traffic percentage to
optimize test coverage or performance.
https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html
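In boto3, the canary is configured on the deployment itself and later promoted by updating the stage. A minimal sketch, assuming a placeholder REST API ID and a prod stage:

```python
import boto3

apigw = boto3.client("apigateway")

# Deploy the new API version as a canary receiving 10% of prod traffic.
# The REST API ID is a placeholder.
dep = apigw.create_deployment(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    canarySettings={"percentTraffic": 10.0, "useStageCache": False},
)

# After verification, promote the canary: point the stage at the new
# deployment and clear the canary settings.
apigw.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/deploymentId", "value": dep["id"]},
        {"op": "remove", "path": "/canarySettings"},
    ],
)
```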
QUESTION 643
A company runs Amazon EC2 instances in multiple AWS accounts that are individually billed. The company recently purchased a Savings Plan. Because of changes in the company’s business requirements, the company has decommissioned a large number of EC2 instances. The company wants to use its Savings Plan discounts on its other AWS accounts.
Which combination of steps will meet these requirements? (Choose two.)
A. From the AWS Account Management Console of the management account, turn on discount sharing from the billing preferences section.
B. From the AWS Account Management Console of the account that purchased the existing Savings Plan, turn on discount sharing from the billing preferences section. Include all accounts.
C. From the AWS Organizations management account, use AWS Resource Access Manager (AWS RAM) to share the Savings Plan with other accounts.
D. Create an organization in AWS Organizations in a new payer account. Invite the other AWS accounts to join the organization from the management account.
E. Create an organization in AWS Organizations in the existing AWS account with the existing EC2 instances and Savings Plan. Invite the other AWS accounts to join the organization from the management account.
A. From the AWS Account Management Console of the management account, turn on discount
sharing from the billing preferences section.
E. Create an organization in AWS Organizations in the existing AWS account with the existing EC2
instances and Savings Plan. Invite the other AWS accounts to join the organization from the
management account.
Explanation:
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ri-turn-off.html
QUESTION 642
A media company uses an Amazon CloudFront distribution to deliver content over the internet.
The company wants only premium customers to have access to the media streams and file
content. The company stores all content in an Amazon S3 bucket. The company also delivers
content on demand to customers for a specific purpose, such as movie rentals or music
downloads.
Which solution will meet these requirements?
A. Generate and provide S3 signed cookies to premium customers.
B. Generate and provide CloudFront signed URLs to premium customers.
C. Use origin access control (OAC) to limit the access of non-premium customers.
D. Generate and activate field-level encryption to block non-premium customers.
B. Generate and provide CloudFront signed URLs to premium customers.
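CloudFront signed URLs grant time-limited access to individual objects, which fits per-rental or per-download delivery. A minimal signing sketch using botocore's CloudFrontSigner; the key pair ID, private key file, and distribution domain are placeholders:

```python
from datetime import datetime, timedelta

import rsa  # third-party package used here for RSA-SHA1 signing
from botocore.signers import CloudFrontSigner


def rsa_signer(message: bytes) -> bytes:
    # "private_key.pem" is a placeholder for the CloudFront signing key.
    with open("private_key.pem", "rb") as f:
        key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, key, "SHA-1")


signer = CloudFrontSigner("K2EXAMPLEKEYID", rsa_signer)

# URL valid for 24 hours; domain and object path are placeholders.
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/movies/rental.mp4",
    date_less_than=datetime.utcnow() + timedelta(hours=24),
)
print(url)
```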
QUESTION 641
A company wants to build a web application on AWS. Client access requests to the website are
not predictable and can be idle for a long time. Only customers who have paid a subscription fee
can have the ability to sign in and use the web application.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
A. Create an AWS Lambda function to retrieve user information from Amazon DynamoDB. Create
an Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda
function.
B. Create an Amazon Elastic Container Service (Amazon ECS) service behind an Application Load
Balancer to retrieve user information from Amazon RDS. Create an Amazon API Gateway
endpoint to accept RESTful APIs. Send the API calls to the Lambda function.
C. Create an Amazon Cognito user pool to authenticate users.
D. Create an Amazon Cognito identity pool to authenticate users.
E. Use AWS Amplify to serve the frontend web content with HTML, CSS, and JS. Use an integrated
Amazon CloudFront configuration.
F. Use Amazon S3 static web hosting with PHP, CSS, and JS. Use Amazon CloudFront to serve the
frontend web content.
A. Create an AWS Lambda function to retrieve user information from Amazon DynamoDB. Create
an Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda
function.
C. Create an Amazon Cognito user pool to authenticate users.
E. Use AWS Amplify to serve the frontend web content with HTML, CSS, and JS. Use an integrated
Amazon CloudFront configuration.
Explanation:
Create a web application = AWS Amplify. Sign in users = Amazon Cognito user pool. Traffic can be idle for a long time = AWS Lambda, which bills per invocation rather than per hour.
Option B (Amazon ECS) is eliminated because containers run continuously and cost money while the website is idle, so Lambda is the more cost-effective choice. Option D is incorrect because user pools handle authentication (identity verification), while identity pools handle authorization (granting AWS credentials). Option F is wrong because Amazon S3 static website hosting does not support server-side scripts such as PHP, JSP, or ASP.NET; client-side HTML, CSS, and JavaScript are fine.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
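On the authentication side, a Cognito user pool is a managed user directory with hosted sign-up and sign-in. A minimal boto3 sketch with placeholder names:

```python
import boto3

cognito = boto3.client("cognito-idp")

# Create the user pool that authenticates paying subscribers.
pool = cognito.create_user_pool(PoolName="subscribers")

# An app client lets the web frontend call the sign-in APIs.
cognito.create_user_pool_client(
    UserPoolId=pool["UserPool"]["Id"],
    ClientName="web-app",
    GenerateSecret=False,
)
```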
QUESTION 640
A company has an on-premises server that uses an Oracle database to process and store
customer information. The company wants to use an AWS database service to achieve higher
availability and to improve application performance. The company also wants to offload reporting
from its primary database system.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB instance in
multiple AWS Regions. Point the reporting functions toward a separate DB instance from the
primary DB instance.
B. Use Amazon RDS in a Single-AZ deployment to create an Oracle database. Create a read replica
in the same zone as the primary DB instance. Direct the reporting functions to the read replica.
C. Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database.
Direct the reporting functions to use the reader instance in the cluster deployment.
D. Use Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora
database. Direct the reporting functions to the reader instances.
C. Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database.
Direct the reporting functions to use the reader instance in the cluster deployment.
https://aws.amazon.com/rds/oracle/#
QUESTION 639
A company wants to use the AWS Cloud to improve its on-premises disaster recovery (DR)
configuration. The company’s core production business application uses Microsoft SQL Server
Standard, which runs on a virtual machine (VM). The application has a recovery point objective
(RPO) of 30 seconds or fewer and a recovery time objective (RTO) of 60 minutes. The DR
solution needs to minimize costs wherever possible.
Which solution will meet these requirements?
A. Configure a multi-site active/active setup between the on-premises server and AWS by using
Microsoft SQL Server Enterprise with Always On availability groups.
B. Configure a warm standby Amazon RDS for SQL Server database on AWS. Configure AWS
Database Migration Service (AWS DMS) to use change data capture (CDC).
C. Use AWS Elastic Disaster Recovery configured to replicate disk changes to AWS as a pilot light.
D. Use third-party backup software to capture backups every night. Store a secondary set of
backups in Amazon S3.
C. Use AWS Elastic Disaster Recovery configured to replicate disk changes to AWS as a pilot light.
https://aws.amazon.com/tw/blogs/architecture/disaster-recovery-dr-architecture-on-aws-part-iii-pilot-light-and-warm-standby/
Other options:
B. Configure a warm standby Amazon RDS for SQL Server database on AWS. Configure AWS Database Migration Service (AWS DMS) to use change data capture (CDC). – Amazon RDS for SQL Server might not support every feature that the on-premises database uses, so it is not a like-for-like replication target, and a warm standby costs more than the pilot-light approach in option C.
QUESTION 638
A global video streaming company uses Amazon CloudFront as a content distribution network
(CDN). The company wants to roll out content in a phased manner across multiple countries. The
company needs to ensure that viewers who are outside the countries to which the company rolls
out content are not able to view the content.
Which solution will meet these requirements?
A. Add geographic restrictions to the content in CloudFront by using an allow list. Set up a custom
error message.
B. Set up a new URL for restricted content. Authorize access by using a signed URL and cookies.
Set up a custom error message.
C. Encrypt the data for the content that the company distributes. Set up a custom error message.
D. Create a new URL for restricted content. Set up a time-restricted access policy for signed URLs.
A. Add geographic restrictions to the content in CloudFront by using an allow list. Set up a custom
error message.
Explanation:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html
QUESTION 637
A company runs a three-tier web application in the AWS Cloud that operates across three
Availability Zones. The application architecture has an Application Load Balancer, an Amazon
EC2 web server that hosts user session states, and a MySQL database that runs on an EC2
instance. The company expects sudden increases in application traffic. The company wants to be
able to scale to meet future application capacity demands and to ensure high availability across
all three Availability Zones.
Which solution will meet these requirements?
A. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment.
Use Amazon ElastiCache for Redis with high availability to store session data and to cache
reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
B. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment.
Use Amazon ElastiCache for Memcached with high availability to store session data and to cache
reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
C. Migrate the MySQL database to Amazon DynamoDB. Use DynamoDB Accelerator (DAX) to cache reads. Store the session data in DynamoDB. Migrate the web server to an Auto Scaling
group that is in three Availability Zones.
D. Migrate the MySQL database to Amazon RDS for MySQL in a single Availability Zone. Use
Amazon ElastiCache for Redis with high availability to store session data and to cache reads.
Migrate the web server to an Auto Scaling group that is in three Availability Zones.
A. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment.
Use Amazon ElastiCache for Redis with high availability to store session data and to cache
reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
Explanation:
Memcached is best suited for simple caching, while Redis supports replication, Multi-AZ with automatic failover, and persistence. Because session data must survive node failure and the design calls for high availability, ElastiCache for Redis is the better choice; ElastiCache for Memcached does not replicate data across nodes.
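A highly available Redis setup in ElastiCache is a replication group with Multi-AZ and automatic failover enabled. A minimal sketch; identifiers and the node type are placeholders:

```python
import boto3

elasticache = boto3.client("elasticache")

# Primary plus one replica in a different AZ; failover is automatic.
elasticache.create_replication_group(
    ReplicationGroupId="session-store",
    ReplicationGroupDescription="HA Redis for user session state",
    Engine="redis",
    CacheNodeType="cache.t3.medium",
    NumCacheClusters=2,
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
)
```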
QUESTION 636
A company wants to provide data scientists with near real-time read-only access to the
company’s production Amazon RDS for PostgreSQL database. The database is currently
configured as a Single-AZ database. The data scientists use complex queries that will not affect
the production database. The company needs a solution that is highly available.
Which solution will meet these requirements MOST cost-effectively?
A. Scale the existing production database in a maintenance window to provide enough power for the
data scientists.
B. Change the setup from a Single-AZ to a Multi-AZ instance deployment with a larger secondary
standby instance. Provide the data scientists access to the secondary instance.
C. Change the setup from a Single-AZ to a Multi-AZ instance deployment. Provide two additional
read replicas for the data scientists.
D. Change the setup from a Single-AZ to a Multi-AZ cluster deployment with two readable standby
instances. Provide read endpoints to the data scientists.
D. Change the setup from a Single-AZ to a Multi-AZ cluster deployment with two readable standby
instances. Provide read endpoints to the data scientists.
Explanation:
Multi-AZ instance: the standby instance doesn’t serve any read or write traffic.
Multi-AZ DB cluster: consists of primary instance running in one AZ serving read-write traffic and
two other standby running in two different AZs serving read traffic.
https://aws.amazon.com/blogs/database/choose-the-right-amazon-rds-deployment-option-single-
az-instance-multi-az-instance-or-multi-az-database-cluster/
Option C means paying for four instances (the primary, the standby, and two read replicas), while option D needs only three (the primary and two readable standbys). A Multi-AZ DB cluster provides a reader endpoint; the standby in a Multi-AZ instance deployment cannot be accessed for reads.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html
QUESTION 635
A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its
workloads. All secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd
key-value store.
Which solution will meet these requirements?
A. Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to
manage, rotate, and store all secrets in Amazon EKS.
B. Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.
C. Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store
(Amazon EBS) Container Storage Interface (CSI) driver as an add-on.
D. Create a new AWS Key Management Service (AWS KMS) key with the alias/aws/ebs alias.
Enable default Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.
B. Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.
Explanation:
Amazon EKS supports using AWS KMS keys to provide envelope encryption of Kubernetes secrets stored in etcd. Envelope encryption adds an additional, customer-managed layer of encryption for application secrets or user data stored within the cluster. Option A alone does not enable Amazon EKS KMS secrets encryption on the cluster.
https://docs.aws.amazon.com/eks/latest/userguide/enable-kms.html
https://eksctl.io/usage/kms-encryption/
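Secrets encryption can also be enabled on an existing cluster by associating an encryption config that names the KMS key. A minimal sketch; the cluster name and key ARN are placeholders:

```python
import boto3

eks = boto3.client("eks")

# Envelope-encrypt Kubernetes secrets in etcd with a customer managed key.
eks.associate_encryption_config(
    clusterName="example-cluster",
    encryptionConfig=[
        {
            "resources": ["secrets"],
            "provider": {
                "keyArn": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
            },
        }
    ],
)
```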
QUESTION 634
A company wants to build a logging solution for its multiple AWS accounts. The company
currently stores the logs from all accounts in a centralized account. The company has created an
Amazon S3 bucket in the centralized account to store the VPC flow logs and AWS CloudTrail
logs. All logs must be highly available for 30 days for frequent analysis, retained for an additional
60 days for backup purposes, and deleted 90 days after creation.
Which solution will meet these requirements MOST cost-effectively?
A. Transition objects to the S3 Standard storage class 30 days after creation. Write an expiration
action that directs Amazon S3 to delete objects after 90 days.
B. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class 30 days
after creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days.
Write an expiration action that directs Amazon S3 to delete objects after 90 days.
C. Transition objects to the S3 Glacier Flexible Retrieval storage class 30 days after creation. Write
an expiration action that directs Amazon S3 to delete objects after 90 days.
D. Transition objects to the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class 30
days after creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90
days. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
C. Transition objects to the S3 Glacier Flexible Retrieval storage class 30 days after creation. Write
an expiration action that directs Amazon S3 to delete objects after 90 days.
Explanation:
Logs stay in S3 Standard for the first 30 days of frequent analysis, move to S3 Glacier Flexible Retrieval for the 60-day backup period, and are deleted at 90 days. Options B and D transition objects to Glacier Flexible Retrieval only after 90 days, when the objects are already scheduled for deletion.
QUESTION 633
A company stores data in Amazon S3. According to regulations, the data must not contain
personally identifiable information (PII). The company recently discovered that S3 buckets have some objects that contain PII. The company needs to automatically detect PII in S3 buckets and
to notify the company’s security team.
Which solution will meet these requirements?
A. Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData event type
from Macie findings and to send an Amazon Simple Notification Service (Amazon SNS)
notification to the security team.
B. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type
from GuardDuty findings and to send an Amazon Simple Notification Service (Amazon SNS)
notification to the security team.
C. Use Amazon Macie. Create an Amazon EventBridge rule to filter the
SensitiveData:S3Object/Personal event type from Macie findings and to send an Amazon Simple
Queue Service (Amazon SQS) notification to the security team.
D. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type
from GuardDuty findings and to send an Amazon Simple Queue Service (Amazon SQS)
notification to the security team.
A. Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData event type
from Macie findings and to send an Amazon Simple Notification Service (Amazon SNS)
notification to the security team.
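The EventBridge rule matches Macie findings whose type begins with SensitiveData and forwards them to an SNS topic. A minimal sketch; the topic ARN is a placeholder:

```python
import json

import boto3

events = boto3.client("events")

# Match Macie findings for discovered sensitive data (PII).
events.put_rule(
    Name="macie-pii-findings",
    EventPattern=json.dumps({
        "source": ["aws.macie"],
        "detail-type": ["Macie Finding"],
        "detail": {"type": [{"prefix": "SensitiveData"}]},
    }),
)

# Notify the security team through SNS.
events.put_targets(
    Rule="macie-pii-findings",
    Targets=[{"Id": "security-team-sns",
              "Arn": "arn:aws:sns:us-east-1:111122223333:security-team"}],
)
```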
QUESTION 632
A company has a workload in an AWS Region. Customers connect to and access the workload
by using an Amazon API Gateway REST API. The company uses Amazon Route 53 as its DNS
provider. The company wants to provide individual and secure URLs for all customers.
Which combination of steps will meet these requirements with the MOST operational efficiency?
(Choose three.)
A. Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53
hosted zone and a record in the zone that points to the API Gateway endpoint.
B. Request a wildcard certificate that matches the domains in AWS Certificate Manager (ACM) in a
different Region.
C. Create hosted zones for each customer as required in Route 53. Create zone records that point
to the API Gateway endpoint.
D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager
(ACM) in the same Region.
E. Create multiple API endpoints for each customer in API Gateway.
F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS
Certificate Manager (ACM).
A. Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53
hosted zone and a record in the zone that points to the API Gateway endpoint.
D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager
(ACM) in the same Region.
F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS
Certificate Manager (ACM).
Using a wildcard custom domain name and a wildcard certificate avoids managing an individual domain and certificate for each customer, which is the most operationally efficient approach. For a Regional API, the ACM certificate must be requested in the same Region as the API, which rules out option B. Per-customer hosted zones (option C) and per-customer API endpoints (option E) add unnecessary complexity.
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/AboutHZWorkingWith.html
Step A registers the domain and creates a wildcard custom domain name in a Route 53 hosted zone, with a record that maps individual customer URLs to the API Gateway endpoint. Step D requests a wildcard certificate from AWS Certificate Manager (ACM) that matches the custom domain name, covering all subdomains for secure HTTPS communication. Step F creates a custom domain name in API Gateway for the REST API and imports the ACM certificate so the custom domain can serve TLS.
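The wiring of steps D, F, and A can be sketched with boto3. The wildcard domain, certificate ARN (in the same Region as the API), and hosted zone ID below are placeholders:

```python
import boto3

apigw = boto3.client("apigateway")
route53 = boto3.client("route53")

# Step F: custom domain in API Gateway backed by the wildcard ACM cert.
domain = apigw.create_domain_name(
    domainName="*.api.example.com",
    regionalCertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE",
    endpointConfiguration={"types": ["REGIONAL"]},
    securityPolicy="TLS_1_2",
)

# Step A: wildcard alias record pointing at the API Gateway domain.
route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "*.api.example.com",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": domain["regionalHostedZoneId"],
                "DNSName": domain["regionalDomainName"],
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```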
QUESTION 631
A company needs to integrate with a third-party data feed. The data feed sends a webhook to
notify an external service when new data is ready for consumption. A developer wrote an AWS
Lambda function to retrieve data when the company receives a webhook callback. The developer
must make the Lambda function available for the third party to call.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create a function URL for the Lambda function. Provide the Lambda function URL to the third
party for the webhook.
B. Deploy an Application Load Balancer (ALB) in front of the Lambda function. Provide the ALB URL
to the third party for the webhook.
C. Create an Amazon Simple Notification Service (Amazon SNS) topic. Attach the topic to the
Lambda function. Provide the public hostname of the SNS topic to the third party for the webhook.
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Attach the queue to the
Lambda function. Provide the public hostname of the SQS queue to the third party for the
webhook.
A. Create a function URL for the Lambda function. Provide the Lambda function URL to the third
party for the webhook.
The keywords are “Lambda function” and “webhook”: a Lambda function URL provides a built-in HTTPS endpoint with no extra infrastructure to operate. See https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-saas-furls.html#create-stripe-cfn-stack
Explanation:
https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html
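A function URL is a dedicated HTTPS endpoint built into Lambda, so no load balancer or API layer is needed for a simple webhook. A minimal sketch; "webhook-handler" is a placeholder function name:

```python
import boto3

lam = boto3.client("lambda")

# Create a public HTTPS endpoint for the webhook handler.
url_config = lam.create_function_url_config(
    FunctionName="webhook-handler",
    AuthType="NONE",  # the third party calls it without SigV4 signing
)

# With AuthType NONE, a resource-based policy must allow public invocation.
lam.add_permission(
    FunctionName="webhook-handler",
    StatementId="AllowPublicFunctionUrl",
    Action="lambda:InvokeFunctionUrl",
    Principal="*",
    FunctionUrlAuthType="NONE",
)

print(url_config["FunctionUrl"])  # give this URL to the third party
```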