sthithapragnakk -- SAA Exam Dumps Jan 24 101-200 Flashcards
673 # A company has multiple AWS accounts in an organization in AWS Organizations that different business units use. The company has several offices around the world. The company needs to update security group rules to allow new office CIDR ranges or to remove old CIDR ranges across the organization. The company wants to centralize the management of security group rules to minimize the administrative overhead of updating CIDR ranges. Which solution will meet these requirements in the MOST cost-effective way?
A. Create VPC security groups in the organization’s management account. Update security groups when a CIDR range update is necessary.
B. Create a VPC customer-managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across your organization. Use the prefix list in the security groups throughout your organization.
C. Create a list of prefixes managed by AWS. Use an AWS Security Hub policy to enforce security group updates across your organization. Use an AWS Lambda function to update the prefix list automatically when CIDR ranges change.
D. Create security groups in a central AWS administrative account. Create a common AWS Firewall Manager security group policy for your entire organization. Select the previously created security groups as primary groups in the policy.
B. Create a VPC customer-managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across your organization. Use the prefix list in the security groups throughout your organization.
A VPC customer-managed prefix list allows you to define a set of CIDR ranges that can be shared across AWS accounts and Regions, which provides a centralized place to manage CIDR ranges. AWS Resource Access Manager (AWS RAM) can share resources, including prefix lists, across the accounts in an organization, so the customer-managed prefix list is maintained in a single place. Security group rules throughout the organization can then reference the shared prefix list, which ensures that security groups in multiple AWS accounts use the same centralized set of CIDR ranges. When an office CIDR range is added or removed, only the prefix list needs to be updated. This approach minimizes administrative overhead, enables centralized control, and provides a scalable solution for managing security group rules globally.
https://docs.aws.amazon.com/vpc/latest/userguide/managed-prefix-lists.html
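A minimal boto3 sketch of this pattern (the organization ARN, CIDRs, and security group ID are placeholders):
```python
import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# 1. Create a customer-managed prefix list that holds the office CIDR ranges.
prefix_list = ec2.create_managed_prefix_list(
    PrefixListName="office-cidrs",
    AddressFamily="IPv4",
    MaxEntries=50,
    Entries=[
        {"Cidr": "203.0.113.0/24", "Description": "London office"},
        {"Cidr": "198.51.100.0/24", "Description": "Tokyo office"},
    ],
)["PrefixList"]

# 2. Share the prefix list with the organization through AWS RAM.
ram.create_resource_share(
    name="office-cidr-prefix-list",
    resourceArns=[prefix_list["PrefixListArn"]],
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorgid"],
)

# 3. Reference the shared prefix list in a security group rule (run in any member account).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "PrefixListIds": [{"PrefixListId": prefix_list["PrefixListId"],
                           "Description": "Office CIDRs"}],
    }],
)
```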
674 # A company uses an on-premises network attached storage (NAS) system to provide file shares to its high-performance computing (HPC) workloads. The company wants to migrate its latency-sensitive HPC workloads and their storage to the AWS cloud. The enterprise must be able to provide NFS and SMB multiprotocol access from the file system. Which solution will meet these requirements with the lowest latency? (Choose two.)
A. Deploy compute-optimized EC2 instances in a cluster placement group.
B. Deploy compute-optimized EC2 instances in a partition placement group.
C. Attach the EC2 instances to an Amazon FSx for Lustre file system.
D. Connect the EC2 instances to an Amazon FSx for OpenZFS file system.
E. Connect the EC2 instances to an Amazon FSx for NetApp ONTAP file system.
A. Deploy compute-optimized EC2 instances in a cluster placement group.
E. Connect the EC2 instances to an Amazon FSx for NetApp ONTAP file system.
https://aws.amazon.com/fsx/when-to-choose-fsx/
Cluster placement groups allow you to group instances within a single availability zone to provide low-latency network performance. This is suitable for tightly coupled HPC workloads.
Amazon FSx for NetApp ONTAP supports both NFS and SMB protocols, making it suitable for multi-protocol access.
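A short boto3 sketch of launching the HPC instances into a cluster placement group (the AMI ID, subnet, and instance type are placeholders); the FSx for NetApp ONTAP file system would then be mounted from those instances over NFS or SMB:
```python
import boto3

ec2 = boto3.client("ec2")

# Cluster placement groups pack instances close together in one AZ for low-latency networking.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder AMI
    InstanceType="c6i.32xlarge",             # compute-optimized instance type
    MinCount=4,
    MaxCount=4,
    SubnetId="subnet-0123456789abcdef0",     # placeholder subnet in the same AZ
    Placement={"GroupName": "hpc-cluster"},
)
```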
675 # A company is relocating its data center and wants to securely transfer 50TB of data to AWS within 2 weeks. The existing data center has a site-to-site VPN connection to AWS that is 90% utilized. Which AWS service should a solutions architect use to meet these requirements?
A. AWS DataSync with a VPC endpoint
B. AWS Direct Connect
C. AWS Snowball Edge Storage Optimized
D. AWS Storage Gateway
C. AWS Snowball Edge Storage Optimized
676 # A company hosts an application on Amazon EC2 On-Demand Instances in an Auto Scaling group. The application's peak times occur at the same time every day. Application users report slow performance at the start of peak times. The application performs normally 2-3 hours after the peak times start. The company wants to make sure the application works properly at the start of peak times. Which solution will meet these requirements?
A. Configure an application load balancer to correctly distribute traffic to the instances.
B. Configure a dynamic scaling policy so that the Auto Scaling group launches new instances based on memory utilization.
C. Configure a dynamic scaling policy for the Auto Scaling group to start new instances based on CPU utilization.
D. Configure a scheduled scaling policy so that the auto-scaling group starts new instances before peak times.
D. Configure a scheduled scaling policy so that the auto-scaling group starts new instances before peak times.
677 # A company runs applications on AWS that connect to the company's Amazon RDS database. The applications scale on weekends and during peak times of the year. The company wants to scale the database more effectively for its applications that connect to the database. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon DynamoDB with connection pooling with a target pool configuration for the database. Switch applications to use the DynamoDB endpoint.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS proxy endpoint.
C. Use a custom proxy running on Amazon EC2 as the database broker. Change applications to use the custom proxy endpoint.
D. Use an AWS Lambda function to provide a connection pool with a target pool configuration for the database. Change applications to use the Lambda function.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS proxy endpoint.
Amazon RDS Proxy is a managed database proxy that provides connection pooling, failover, and security features for database applications. It allows applications to scale more effectively and efficiently by managing database connections on their behalf. It integrates well with RDS and reduces operational overhead.
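A hedged boto3 sketch of putting an RDS Proxy in front of the database (the role, secret, subnet IDs, and instance identifier are placeholders):
```python
import boto3

rds = boto3.client("rds")

# Create the proxy; connection credentials come from a Secrets Manager secret.
proxy = rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="POSTGRESQL",                      # or "MYSQL", depending on the engine
    Auth=[{"AuthScheme": "SECRETS",
           "SecretArn": "arn:aws:secretsmanager:eu-west-1:111122223333:secret:app-db"}],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",
    VpcSubnetIds=["subnet-aaa", "subnet-bbb"],
)["DBProxy"]

# Register the DB instance as the proxy's target; applications then use the proxy endpoint.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBInstanceIdentifiers=["app-db-instance"],
)
print("Connect applications to:", proxy["Endpoint"])
```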
678 # A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that Amazon Elastic Block Store (Amazon EBS) storage and snapshot costs are increasing every month. However, the company does not purchase additional EBS storage every month. The company wants to optimize monthly costs for its current storage usage. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon CloudWatch Logs to monitor Amazon EBS storage utilization. Use Amazon EBS Elastic Volumes to reduce the size of EBS volumes.
B. Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of EBS volumes.
C. Delete all expired and unused snapshots to reduce snapshot costs.
D. Delete all non-essential snapshots. Use Amazon Data Lifecycle Manager to create and manage snapshots according to your company’s snapshot policy requirements.
D. Delete all non-essential snapshots. Use Amazon Data Lifecycle Manager to create and manage snapshots according to your company’s snapshot policy requirements.
Delete all nonessential snapshots: This reduces costs by eliminating unnecessary snapshot storage. Use Amazon Data Lifecycle Manager (DLM): DLM can automate the creation and deletion of snapshots based on defined policies. This reduces operational overhead by automating snapshot management according to the company’s snapshot policy requirements.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
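A minimal boto3 sketch of a Data Lifecycle Manager policy (the role ARN, target tag, schedule, and retention count are assumptions):
```python
import boto3

dlm = boto3.client("dlm")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots, keep 30",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "Daily"}],   # only tagged volumes are snapshotted
        "Schedules": [{
            "Name": "daily-snapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 30},                        # older snapshots are deleted automatically
        }],
    },
)
```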
679 # A company is developing a new application on AWS. The application consists of an Amazon Elastic Container Service (Amazon ECS) cluster, an Amazon S3 bucket that contains assets for the application, and an Amazon RDS for MySQL database that contains the application’s data set. The data set contains sensitive information. The company wants to ensure that only the ECS cluster can access data in the RDS for MySQL database and data in the S3 bucket. What solution will meet these requirements?
A. Create a new AWS Key Management Service (AWS KMS) customer-managed key to encrypt both the S3 bucket and the RDS database for MySQL. Ensure that the KMS key policy includes encryption and decryption permissions for the ECS task execution role.
B. Create an AWS Key Management Service (AWS KMS) AWS Managed Key to encrypt both the S3 bucket and the RDS database for MySQL. Ensure that the S3 bucket policy specifies the ECS task execution role as user.
C. Create an S3 bucket policy that restricts bucket access to the ECS task execution role. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS security group for MySQL to allow access only from the subnets on which the ECS cluster will generate tasks.
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS security group for MySQL to allow access only from the subnets on which the ECS cluster will generate tasks. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access only from the S3 VPC endpoint.
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS security group for MySQL to allow access only from the subnets on which the ECS cluster will generate tasks. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access only from the S3 VPC endpoint.
This approach controls access at the network level by ensuring that the RDS database and S3 bucket are accessible only through the specified VPC endpoints, limiting access to resources within the ECS cluster VPC.
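A hedged boto3 sketch of the network-level controls (the VPC, route table, and bucket names are placeholders):
```python
import boto3, json

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Gateway endpoint so ECS tasks reach S3 without leaving the VPC.
endpoint = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.eu-west-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)["VpcEndpoint"]

# Bucket policy that denies access unless the request arrives through that endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyFromVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::app-assets-bucket", "arn:aws:s3:::app-assets-bucket/*"],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": endpoint["VpcEndpointId"]}},
    }],
}
s3.put_bucket_policy(Bucket="app-assets-bucket", Policy=json.dumps(policy))
```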
680 # A company has a web application that runs on premises. The app experiences latency issues during peak hours. Latency issues occur twice a month. At the beginning of a latency issue, the application’s CPU utilization immediately increases to 10 times its normal amount. The company wants to migrate the application to AWS to improve latency. The company also wants to scale the app automatically when demand for the app increases. The company will use AWS Elastic Beanstalk for application deployment. What solution will meet these requirements?
A. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale based on requests.
B. Configure an Elastic Beanstalk environment to use compute-optimized instances. Configure the environment to scale based on requests.
C. Configure an Elastic Beanstalk environment to use compute-optimized instances. Configure the environment to scale on a schedule.
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.
Predictive scaling works by analyzing historical load data to detect daily or weekly patterns in traffic flows. It uses this information to forecast future capacity needs so Amazon EC2 Auto Scaling can proactively increase the capacity of your Auto Scaling group to match the anticipated load.
Burstable performance instances in unlimited mode can absorb sudden spikes in CPU usage, and configuring the environment to scale on predictive metrics allows you to scale proactively based on anticipated demand. This aligns well with the requirement to scale automatically when CPU utilization increases 10x at the same peak times every day. Therefore, option D is the most suitable solution to improve latency and automatically scale the application during peak hours.
Predictive scaling is well suited for situations where you have:
Cyclical traffic, such as high use of resources during regular business hours and low use of resources during evenings and weekends
Recurring on-and-off workload patterns, such as batch processing, testing, or periodic data analysis
Applications that take a long time to initialize, causing a noticeable latency impact on application performance during scale-out events
In general, if you have regular patterns of traffic increases and applications that take a long time to initialize, you should consider using predictive scaling. Predictive scaling can help you scale faster by launching capacity in advance of forecasted load, compared to using only dynamic scaling, which is reactive in nature. Predictive scaling can also potentially save you money on your EC2 bill by helping you avoid the need to over provision capacity.
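A hedged boto3 sketch of attaching a predictive scaling policy to the Auto Scaling group behind the Elastic Beanstalk environment (the group name and target value are assumptions):
```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="awseb-myenv-AWSEBAutoScalingGroup",   # placeholder Beanstalk-managed ASG
    PolicyName="predictive-cpu-scaling",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 50.0,
            "PredefinedMetricPairSpecification": {"PredefinedMetricType": "ASGCPUUtilization"},
        }],
        "Mode": "ForecastAndScale",          # launch capacity ahead of the forecasted load
        "SchedulingBufferTime": 300,         # start instances 5 minutes before the forecast needs them
    },
)
```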
681 # A company has customers located all over the world. The company wants to use automation to protect its systems and network infrastructure. The company’s security team must be able to track and audit all incremental changes to the infrastructure. What solution will meet these requirements?
A. Use AWS Organizations to configure infrastructure. Use AWS Config to track changes.
B. Use AWS CloudFormation to configure the infrastructure. Use AWS Config to track changes.
C. Use AWS Organizations to configure the infrastructure. Use the AWS Service Catalog to track changes.
D. Use AWS CloudFormation to configure the infrastructure. Use the AWS Service Catalog to track changes.
B. Use AWS CloudFormation to configure the infrastructure. Use AWS Config to track changes
AWS CloudFormation is an infrastructure as code (IaC) service that allows you to define and provision AWS infrastructure. Using CloudFormation ensures automation in infrastructure configuration, and AWS Config can be used to track changes and maintain an inventory of resources.
682 # A startup is hosting a website for its customers on an Amazon EC2 instance. The website consists of a stateless Python application and a MySQL database. The website only serves a small amount of traffic. The company is concerned about instance reliability and needs to migrate to a highly available architecture. The company cannot modify the application code. What combination of actions should a solutions architect take to achieve high availability for the website? (Choose two.)
A. Provision an Internet gateway in each availability zone in use.
B. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
C. Migrate the database to Amazon DynamoDB and enable DynamoDB auto-scaling.
D. Use AWS DataSync to synchronize database data across multiple EC2 instances.
E. Create an application load balancer to distribute traffic to an auto-scaling group of EC2 instances that are spread across two availability zones.
B. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
E. Create an application load balancer to distribute traffic to an auto-scaling group of EC2 instances that are spread across two availability zones.
Amazon RDS Multi-AZ (Availability Zone) deployments provide high availability for database instances. Automatically replicates the database to a standby instance in a different Availability Zone, ensuring failover in the event of a primary AZ failure.
This option ensures that traffic is distributed across multiple EC2 instances for the website. The combination with an auto-scaling group enables demand-based auto-scaling, providing high availability.
Therefore, options E and B together provide a solution to achieve high availability for the website without modifying the application code.
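A minimal boto3 sketch of converting the database to Multi-AZ after it has been migrated to RDS (the instance identifier is a placeholder):
```python
import boto3

rds = boto3.client("rds")

# Enable Multi-AZ so RDS keeps a synchronous standby in another Availability Zone.
rds.modify_db_instance(
    DBInstanceIdentifier="website-mysql",
    MultiAZ=True,
    ApplyImmediately=True,
)
```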
683 # A company is moving its data and applications to AWS during a multi-year migration project. The company wants to securely access Amazon S3 data from the company's AWS Region and from the company's on-premises location. The data must not traverse the Internet. The company has established an AWS Direct Connect connection between the Region and the on-premises location. Which solution will meet these requirements?
A. Create gateway endpoints for Amazon S3. Use gateway endpoints to securely access the data from the Region and the on-premises location.
B. Create a gateway in AWS Transit Gateway to access Amazon S3 securely from the Region and the on-premises location.
C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.
D. Use an AWS Key Management Service (AWS KMS) key to securely access data from your region and on-premises location.
C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.
Gateway endpoints do not allow access from on-premises networks, from peered VPCs in other AWS Regions, or through a transit gateway. For those scenarios, you must use an interface endpoint, which is available for an additional cost. https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
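A hedged boto3 sketch of creating an S3 interface endpoint that on-premises clients can reach over Direct Connect (the VPC, subnet, and security group IDs are placeholders):
```python
import boto3

ec2 = boto3.client("ec2")

# Interface endpoints create ENIs with private IPs that are reachable from on premises
# over Direct Connect, unlike gateway endpoints.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.eu-central-1.s3",
    VpcEndpointType="Interface",
    SubnetIds=["subnet-aaa", "subnet-bbb"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
```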
684 # A company created a new organization in AWS Organizations. The organization has multiple accounts for the company’s development teams. Development team members use AWS IAM Identity Center (AWS Single Sign-On) to access accounts. For each of the company’s applications, development teams must use a predefined application name to label the resources that are created. A solutions architect needs to design a solution that gives the development team the ability to create resources only if the application name tag has an approved value. What solution will meet these requirements?
A. Create an IAM group that has a conditional permission policy that requires the application name tag to be specified to create resources.
B. Create a cross-account role that has a deny policy for any resources that have the application name tag.
C. Create a resource group in AWS Resource Groups to validate that the tags are applied to all resources in all accounts.
D. Create a tag policy in Organizations that has a list of allowed application names.
D. Create a tag policy in Organizations that has a list of allowed application names.
AWS Organizations allows you to create tag policies that define which tags should be applied to resources and which values are allowed. This is an effective way to ensure that only approved application names are used as tag values. Therefore, option D, creating a tag policy in Organizations with a list of allowed application names, is the most appropriate solution to enforce the required tag values.
Wrong: A. Create an IAM group that has a conditional permission policy that requires the application name tag to be specified to create resources. While IAM policies may include conditions, they focus more on actions and resources, and may not be best suited to enforce specific tag values.
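A hedged boto3 sketch of a tag policy with an allowed-value list (the tag key, approved values, and target root ID are assumptions, and the policy JSON follows the tag-policy syntax as I understand it):
```python
import boto3, json

org = boto3.client("organizations")

tag_policy = {
    "tags": {
        "appname": {
            "tag_key": {"@@assign": "appname"},
            "tag_value": {"@@assign": ["inventory-service", "billing-portal"]},  # approved names
            "enforced_for": {"@@assign": ["ec2:instance"]},
        }
    }
}

policy = org.create_policy(
    Name="approved-application-names",
    Description="Only approved application name tag values",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)["Policy"]["PolicySummary"]

# Attach the policy to the organization root (or an OU) so it applies to all member accounts.
org.attach_policy(PolicyId=policy["Id"], TargetId="r-examplerootid")
```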
685 # A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure solution to manage the master user password by rotating the password every 30 days. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every 30 days.
B. Use the modify-db-instance command in the AWS CLI to change the password.
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
D. Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate password rotation.
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
AWS Secrets Manager provides a managed solution for rotating database credentials, including built-in support for Amazon RDS. Enables automatic master user password rotation for RDS for PostgreSQL with minimal operational overhead.
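A hedged boto3 sketch of attaching a 30-day rotation schedule to the secret that stores the RDS master credentials (the secret name and rotation Lambda ARN are placeholders; the Lambda is assumed to be the standard Secrets Manager PostgreSQL single-user rotation function already deployed in the account):
```python
import boto3

secrets = boto3.client("secretsmanager")

secrets.rotate_secret(
    SecretId="prod/rds/postgres-master",
    RotationLambdaARN=("arn:aws:lambda:eu-west-1:111122223333:function:"
                       "SecretsManagerRDSPostgreSQLRotationSingleUser"),
    RotationRules={"AutomaticallyAfterDays": 30},   # rotate every 30 days
    RotateImmediately=False,
)
```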
686 # A company tests an application that uses an Amazon DynamoDB table. Testing is done for 4 hours once a week. The company knows how many read and write operations the application performs on the table each second during testing. The company does not currently use DynamoDB for any other use cases. A solutions architect needs to optimize the costs of the table. Which solution will meet these requirements?
A. Choose on-demand mode. Update read and write capacity units appropriately.
B. Choose provisioned mode. Update read and write capacity units appropriately.
C. Purchase DynamoDB reserved capacity for a period of 1 year.
D. Purchase DynamoDB reserved capacity for a period of 3 years.
B. Choose provisioned mode. Update read and write capacity units appropriately.
In provisioned capacity mode, you manually provision read and write capacity units based on your known workload. Because the company knows the read and write operations during testing, it can provide the exact capacity needed for those specific periods, optimizing costs by not paying for unused capacity at other times.
On-demand mode in DynamoDB automatically adjusts read and write capacity based on actual usage. However, since the workload is known and occurs during specific periods, provisioned mode would probably be more cost-effective.
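A hedged boto3 sketch of switching the table to provisioned capacity with the known test-time throughput (the table name and unit counts are assumptions):
```python
import boto3

dynamodb = boto3.client("dynamodb")

# Set provisioned capacity to match the known read/write rates used during the weekly tests.
dynamodb.update_table(
    TableName="test-results",
    BillingMode="PROVISIONED",
    ProvisionedThroughput={
        "ReadCapacityUnits": 100,    # known reads per second during testing
        "WriteCapacityUnits": 50,    # known writes per second during testing
    },
)
```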
687 # A company runs its applications on Amazon EC2 instances. The company conducts periodic financial evaluations of its AWS costs. The company recently identified an unusual expense. The company needs a solution to avoid unusual expenses. The solution should monitor costs and notify responsible stakeholders in case of unusual expenses. What solution will meet these requirements?
A. Use an AWS budget template to create a zero-spend budget.
B. Create an AWS Cost Anomaly Detection Monitor in the AWS Billing and Cost Management Console.
C. Create AWS Pricing Calculator estimates for current running workload pricing details.
D. Use Amazon CloudWatch to monitor costs and identify unusual expenses.
B. Create an AWS Cost Anomaly Detection Monitor in the AWS Billing and Cost Management Console.
AWS Cost Anomaly Detection uses machine learning to identify unexpected spending patterns and anomalies in your costs. It can automatically detect unusual expenses and send notifications, making it suitable for the described scenario.
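A hedged boto3 sketch of a service-level anomaly monitor with an email subscription (the email address and threshold are assumptions; the parameter shapes follow the Cost Explorer API as I recall it):
```python
import boto3

ce = boto3.client("ce")

monitor_arn = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "aws-services-monitor",
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",      # track anomalies per AWS service
    }
)["MonitorArn"]

ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "cost-anomaly-alerts",
        "MonitorArnList": [monitor_arn],
        "Subscribers": [{"Address": "finops@example.com", "Type": "EMAIL"}],
        "Frequency": "DAILY",
        "Threshold": 100.0,                 # notify when an anomaly's impact exceeds $100
    }
)
```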
688 # A marketing company receives a large amount of new clickstream data in Amazon S3 from a marketing campaign. The business needs to analyze the clickstream data in Amazon S3 quickly. Next, the business needs to determine whether to process the data further in the data pipeline. Which solution will meet these requirements with the LEAST operational overhead?
A. Create external tables in a Spark catalog. Set up jobs in AWS Glue to query the data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query data.
C. Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query data.
D. Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to query data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query data.
AWS Glue Crawler can automatically discover and catalog metadata about clickstream data in S3. Amazon Athena, as a serverless query service, allows you to perform fast ad hoc SQL queries on data without needing to configure and manage the infrastructure.
AWS Glue is a fully managed extract, transform, and load (ETL) service, and Athena is a serverless query service that allows you to analyze data directly in Amazon S3 using SQL queries. By configuring an AWS Glue crawler to crawl the data, you can create a schema for the data, and then use Athena to query the data in place without the need to load it into a separate database. This minimizes operational overhead.
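A hedged boto3 sketch of cataloging the clickstream data and querying it ad hoc (the bucket, database, role, table, and query are placeholders):
```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the S3 prefix so Glue infers the schema and registers a table in the Data Catalog.
glue.create_crawler(
    Name="clickstream-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",
    DatabaseName="clickstream_db",
    Targets={"S3Targets": [{"Path": "s3://campaign-clickstream/raw/"}]},
)
glue.start_crawler(Name="clickstream-crawler")

# Once the table exists, Athena can query the data directly in S3 with standard SQL.
athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS hits FROM clicks GROUP BY page ORDER BY hits DESC LIMIT 10",
    QueryExecutionContext={"Database": "clickstream_db"},
    ResultConfiguration={"OutputLocation": "s3://campaign-clickstream/athena-results/"},
)
```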
689 # A company runs an SMB file server in its data center. The file server stores large files that are frequently accessed by the company for up to 7 days after the file creation date. After 7 days, the company should be able to access the files with a maximum recovery time of 24 hours. What solution will meet these requirements?
A. Use AWS DataSync to copy data older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 file gateway to increase the company’s storage space. Create an S3 lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx file gateway to increase your company’s storage space. Create an Amazon S3 lifecycle policy to transition data after 7 days.
D. Configure access to Amazon S3 for each user. Create an S3 lifecycle policy to transition data to S3 Glacier Flexible Retrieval after 7 days.
B. Create an Amazon S3 file gateway to increase the company’s storage space. Create an S3 lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
With an S3 file gateway, you can present an S3 bucket as a file share. Using an S3 lifecycle policy to transition data to Glacier Deep Archive after 7 days allows for cost savings, and recovery time is within the specified 24 hours.
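A minimal boto3 sketch of the lifecycle rule on the bucket behind the S3 File Gateway (the bucket name is a placeholder):
```python
import boto3

s3 = boto3.client("s3")

# Move file-share objects to Glacier Deep Archive 7 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="smb-file-share-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},          # apply to every object in the bucket
            "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```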
690 # A company runs a web application on Amazon EC2 instances in an Auto Scaling group. The application uses a database running on an Amazon RDS for PostgreSQL DB instance. The app runs slowly when traffic increases. The database experiences heavy read load during high traffic periods. What actions should a solutions architect take to resolve these performance issues? (Choose two.)
A. Enable auto-scaling for the database instance.
B. Create a read replica for the database instance. Configure the application to send read traffic to the read replica.
C. Convert the database instance to a Multi-AZ DB instance deployment. Configure the application to send read traffic to the standby database instance.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.
E. Configure Auto Scaling group subnets to ensure that EC2 instances are provisioned in the same availability zone as the database instance.
B. Create a read replica for the database instance. Configure the application to send read traffic to the read replica.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.
By creating a read replica, you offload read traffic from the primary database instance to the replica, distributing the read load and improving overall performance. This is a common approach to scale out read-heavy database workloads.
Amazon ElastiCache is a managed caching service that can help improve application performance by caching frequently accessed data. Cached query results in ElastiCache can reduce the load on the PostgreSQL database, especially for repeated read queries.
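A hedged boto3 sketch of adding a read replica that the application's read paths point at (the identifiers are placeholders):
```python
import boto3

rds = boto3.client("rds")

# Create a read replica to take the read traffic off the primary instance.
replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="webapp-postgres-replica",
    SourceDBInstanceIdentifier="webapp-postgres",
)["DBInstance"]

# Point the application's read-only queries at the replica endpoint,
# which becomes available once the replica finishes creating.
print("Read endpoint (when available):", replica.get("Endpoint"))
```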
691 # A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an application. The company creates a snapshot of each EBS volume every day to meet compliance requirements. The company wants to implement an architecture that prevents accidental deletion of EBS volume snapshots. The solution should not change the administrative rights of the storage administrator user. Which solution will meet these requirements with the LEAST administrative effort?
A. Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the AWS CLI from the new EC2 instance to delete snapshots.
B. Create an IAM policy that denies the deletion of snapshots. Attach the policy to the storage administrator user.
C. Add tags to snapshots. Create Recycle Bin retention rules for EBS snapshots that have the tags.
D. Lock EBS snapshots to prevent deletion.
D. Lock EBS snapshots to prevent deletion.
Amazon EBS provides a built-in feature to lock snapshots, preventing them from being deleted. This is a straightforward and effective solution that does not involve creating additional IAM roles, policies, or tags. It directly addresses the requirement to prevent accidental deletion with minimal administrative effort.
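A hedged boto3 sketch of locking a snapshot (the snapshot ID and duration are placeholders; the EC2 LockSnapshot API is relatively new, so check that your SDK version supports it):
```python
import boto3

ec2 = boto3.client("ec2")

# Lock the snapshot so it cannot be deleted for 30 days.
# Governance mode lets users with the right IAM permission unlock it early;
# compliance mode prevents deletion by anyone until the lock expires.
ec2.lock_snapshot(
    SnapshotId="snap-0123456789abcdef0",
    LockMode="governance",
    LockDuration=30,        # days
)
```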
692 # An enterprise application uses network load balancers, auto-scaling groups, Amazon EC2 instances, and databases that are deployed in an Amazon VPC. The company wants to capture information about traffic to and from network interfaces in near real time in its Amazon VPC. The company wants to send the information to Amazon OpenSearch Service for analysis. What solution will meet these requirements?
A. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Streams to stream log group logs to the OpenSearch service.
B. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Firehose to stream log group logs to the OpenSearch service.
C. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Streams to stream the trail records to the OpenSearch service.
D. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Firehose to stream the trail logs to the OpenSearch service.
B. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Firehose to stream log group logs to the OpenSearch service.
Other answers:
A. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Streams to stream log group logs to the OpenSearch service. This option involves configuring VPC flow logs to capture network traffic information and send the logs to an Amazon CloudWatch log group. Next, it suggests using Amazon Kinesis Data Streams to stream the log group logs to the Amazon OpenSearch service. While this is technically feasible, using Kinesis Data Streams could introduce unnecessary complexity for this use case.
C. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Streams to stream the trail records to the OpenSearch service. This option involves using AWS CloudTrail to capture VPC flow logs and then using Amazon Kinesis Data Streams to stream the logs to the Amazon OpenSearch service. However, CloudTrail is typically used to log API activity and does not provide the detailed network traffic information captured by VPC flow logs.
D. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Firehose to stream the trail logs to the OpenSearch service. Like option C, this option involves using AWS CloudTrail to capture VPC flow logs, but suggests using Amazon Kinesis Data Firehose instead of Kinesis Data Streams. Again, CloudTrail is not the optimal choice for capturing detailed information about network traffic.
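A hedged boto3 sketch of the flow-log and delivery plumbing (the log group, roles, and Firehose stream are placeholders; the Firehose delivery stream with an OpenSearch destination is assumed to exist already):
```python
import boto3

ec2 = boto3.client("ec2")
logs = boto3.client("logs")

# Publish VPC flow logs to a CloudWatch Logs log group.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-to-cwl",
)

# Stream the log group to the Kinesis Data Firehose delivery stream that targets OpenSearch.
logs.put_subscription_filter(
    logGroupName="/vpc/flow-logs",
    filterName="to-opensearch",
    filterPattern="",          # forward every event
    destinationArn="arn:aws:firehose:eu-west-1:111122223333:deliverystream/flowlogs-to-opensearch",
    roleArn="arn:aws:iam::111122223333:role/cwl-to-firehose",
)
```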
693 # A company is developing an application that will run on an Amazon Elastic Kubernetes Service (Amazon EKS) production cluster. The EKS cluster has managed node groups that are provisioned with On-Demand Instances. The company needs a dedicated EKS cluster for development work. The company will use the development cluster infrequently to test the resilience of the application. The EKS cluster must manage all nodes. Which solution will meet these requirements in the MOST cost-effective way?
A. Create a managed node group that contains only spot instances.
B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second node group with Spot Instances.
C. Create an Auto Scaling group that has a launch configuration that uses Spot Instances. Configure the user data to add the nodes to the EKS cluster.
D. Create a managed node group that contains only on-demand instances.
B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second node group with Spot Instances.
This option allows the company to have a dedicated EKS cluster for development work. By creating two managed node groups, one that uses On-Demand Instances and one that uses Spot Instances, the company can manage costs effectively. On-Demand Instances provide stability and reliability, which is suitable for development work that requires consistent and predictable performance.
Spot Instances offer cost savings, but come with the trade-off of potential short-notice interruption. However, for infrequent testing and resilience experiments, Spot Instances can be used to optimize costs.
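A hedged boto3 sketch of the two managed node groups on the development cluster (the cluster name, subnets, and node role are placeholders):
```python
import boto3

eks = boto3.client("eks")

common = {
    "clusterName": "dev-cluster",
    "subnets": ["subnet-aaa", "subnet-bbb"],
    "nodeRole": "arn:aws:iam::111122223333:role/eks-node-role",
    "instanceTypes": ["m5.large"],
}

# Small On-Demand group for baseline, stable capacity.
eks.create_nodegroup(
    nodegroupName="dev-on-demand",
    capacityType="ON_DEMAND",
    scalingConfig={"minSize": 1, "maxSize": 2, "desiredSize": 1},
    **common,
)

# Spot group for the infrequent, interruption-tolerant test workloads.
eks.create_nodegroup(
    nodegroupName="dev-spot",
    capacityType="SPOT",
    scalingConfig={"minSize": 0, "maxSize": 4, "desiredSize": 0},
    **common,
)
```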
694 # A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The enterprise needs to fully control users’ ability to create, rotate, and deactivate encryption keys with minimal effort for any data that needs to be encrypted. What solution will meet these requirements?
A. Use default server-side encryption with Amazon S3 Managed Encryption Keys (SSE-S3) to store sensitive data.
B. Create a customer-managed key using AWS Key Management Service (AWS KMS). Use the new key to encrypt S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
C. Create an AWS managed key using the AWS Key Management Service (AWS KMS). Use the new key to encrypt S3 objects using server-side encryption with AWS KMS keys (SSE-KMS).
D. Download S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer-managed keys. Upload the encrypted objects back to Amazon S3.
B. Create a customer-managed key using AWS Key Management Service (AWS KMS). Use the new key to encrypt S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
AWS KMS allows you to create and manage customer managed keys (CMKs), giving you full control over the key lifecycle. This includes the ability to create, rotate, and deactivate keys as needed. Using server-side encryption with AWS KMS keys (SSE-KMS) ensures that S3 objects are encrypted with the specified customer-managed key. This provides a secure, managed approach to encrypt sensitive data in Amazon S3.
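A hedged boto3 sketch of a customer-managed key used as the bucket's default encryption (the bucket name is a placeholder):
```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Customer-managed key: the company controls its policy, rotation, and disabling.
key_id = kms.create_key(Description="S3 sensitive data key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Make SSE-KMS with that key the default for all new objects in the bucket.
s3.put_bucket_encryption(
    Bucket="sensitive-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            },
            "BucketKeyEnabled": True,
        }]
    },
)
```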
695 # A company wants to back up its on-premises virtual machines (VMs) to AWS. The company's backup solution exports on-premises backups to an Amazon S3 bucket as objects. The S3 backups must be retained for 30 days and must be automatically deleted after 30 days. What combination of steps will meet these requirements? (Choose three.)
A. Create an S3 bucket that has S3 object locking enabled.
B. Create an S3 bucket that has object versioning enabled.
C. Set a default retention period of 30 days for the objects.
D. Configure an S3 lifecycle policy to protect objects for 30 days.
E. Configure an S3 lifecycle policy to expire the objects after 30 days.
F. Configure the backup solution to tag objects with a 30-day retention period
A. Create an S3 bucket that has S3 object locking enabled.
C. Set a default retention period of 30 days for the objects.
E. Configure an S3 lifecycle policy to expire the objects after 30 days.
S3 object locking ensures that objects in the bucket cannot be deleted or modified for a specified retention period. This helps meet the requirement to retain backups for 30 days.
Set a default retention period on the S3 bucket, specifying that objects within the bucket are locked for a duration of 30 days. This enforces the retention policy on the objects.
Use an S3 lifecycle policy to automatically expire (delete) objects in the S3 bucket after the specified 30-day retention period. This ensures that backups are automatically deleted after the retention period.
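A hedged boto3 sketch of the three steps (the bucket name and Region are placeholders; Object Lock can only be enabled at bucket creation, which also turns on versioning):
```python
import boto3

s3 = boto3.client("s3")
bucket = "vm-backup-objects"

# 1. Object Lock must be enabled when the bucket is created (this also enables versioning).
s3.create_bucket(
    Bucket=bucket,
    ObjectLockEnabledForBucket=True,
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# 2. Default retention of 30 days for every new object version.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# 3. Lifecycle rule that expires the backups once the 30-day retention has passed.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-after-30-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Expiration": {"Days": 30},
        }]
    },
)
```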
696 # A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon EFS) file system and to another S3 bucket. The files must be copied continuously. New files are added to the original S3 bucket consistently. Copied files should be overwritten only if the source file changes. Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the target S3 bucket and EFS file system. Set the transfer mode to transfer only the data that has changed.
B. Create an AWS Lambda function. Mount the file system in the function. Configure an S3 event notification to invoke the function when files are created and changed in Amazon S3. Configure the function to copy files to the file system and the destination S3 bucket.
C. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the target S3 bucket and EFS file system. Set the transfer mode to transfer all data.
D. Start an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create a script to routinely synchronize all objects that changed in the source S3 bucket with the destination S3 bucket and the mounted file system.
A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the target S3 bucket and EFS file system. Set the transfer mode to transfer only the data that has changed.
AWS DataSync is a managed service that makes it easy to automate, accelerate, and simplify data transfers between on-premises storage systems and AWS storage services. By configuring the transfer mode to transfer only data that has changed, the solution ensures that only changed data is transferred, reducing operational overhead.
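A hedged boto3 sketch of a DataSync task that copies only changed data (the location ARNs are placeholders created beforehand with create_location_s3 and create_location_efs; a second task would target the destination S3 bucket):
```python
import boto3

datasync = boto3.client("datasync")

task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:eu-west-1:111122223333:location/loc-source-s3",
    DestinationLocationArn="arn:aws:datasync:eu-west-1:111122223333:location/loc-dest-efs",
    Name="s3-to-efs-changed-only",
    Options={
        "TransferMode": "CHANGED",     # copy only data that differs from the destination
        "OverwriteMode": "ALWAYS",     # overwrite destination files when the source changes
    },
)

datasync.start_task_execution(TaskArn=task["TaskArn"])
```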
697 # A company uses Amazon EC2 instances and stores data on Amazon Elastic Block Store (Amazon EBS) volumes. The company must ensure that all data is encrypted at rest using AWS Key Management Service (AWS KMS). The company must be able to control the rotation of the encryption keys. Which solution will meet these requirements with the LEAST operational overhead?
A. Create a customer-managed key. Use the key to encrypt EBS volumes.
B. Use an AWS managed key to encrypt EBS volumes. Use the key to set automatic key rotation.
C. Create an external KMS key with imported key material. Use the key to encrypt EBS volumes.
D. Use an AWS-owned key to encrypt EBS volumes.
A. Create a customer-managed key. Use the key to encrypt EBS volumes.
By creating a customer-managed key in AWS Key Management Service (AWS KMS), the company gains control over key rotation and can manage key policies. This enables encryption of EBS volumes with a key that the company can rotate as needed, providing flexibility and control with minimal operational overhead.
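A hedged boto3 sketch of making a customer-managed key the Region-level default for EBS encryption:
```python
import boto3

kms = boto3.client("kms")
ec2 = boto3.client("ec2")

# Customer-managed key whose rotation the company controls.
key_id = kms.create_key(Description="EBS encryption key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)          # yearly automatic rotation

# Encrypt all new EBS volumes in this Region with that key by default.
ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(KmsKeyId=key_id)
```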
698 # An enterprise needs a solution to enforce encryption of data at rest on Amazon EC2 instances. The solution should automatically identify non-compliant resources and enforce compliance policies based on the findings. Which solution will meet these requirements with the LEAST administrative overhead?
A. Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Config and AWS Systems Manager to automate the detection and remediation of unencrypted EBS volumes.
B. Use AWS Key Management Service (AWS KMS) to manage access to encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Lambda and Amazon EventBridge to automate the detection and remediation of unencrypted EBS volumes.
C. Use Amazon Macie to discover unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Systems Manager automation rules to automatically encrypt existing and new EBS volumes.
D. Use Amazon Inspector to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Systems Manager automation rules to automatically encrypt existing and new EBS volumes.
A. Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Config and AWS Systems Manager to automate the detection and remediation of unencrypted EBS volumes.
IAM policies can help control the creation of encrypted EBS volumes, and combining them with AWS Config and Systems Manager provides a comprehensive solution for detection and remediation.
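A hedged boto3 sketch of the detection half: the AWS managed Config rule that flags unencrypted EBS volumes (a Systems Manager remediation document could then be attached to the rule):
```python
import boto3

config = boto3.client("config")

# The AWS managed rule ENCRYPTED_VOLUMES marks any attached, unencrypted EBS volume as noncompliant.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)
```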
699 # A company wants to migrate its on-premises web applications to AWS. The company is located close to the eu-central-1 Region. Because of regulations, the company cannot launch some of its applications in eu-central-1. The company wants to achieve single-digit millisecond latency. Which solution will meet these requirements?
A. Deploy the applications to eu-central-1. Extend the enterprise VPC from eu-central-1 to an edge location on Amazon CloudFront.
B. Deploy the applications in AWS local zones by extending the company’s VPC from eu-central-1 to the chosen local zone.
C. Deploy the applications to eu-central-1. Extend the eu-central-1 enterprise VPC to regional edge caches on Amazon CloudFront.
D. Deploy applications to AWS wavelength zones by extending the eu-central-1 enterprise VPC to the chosen wavelength zone.
B. Deploy the applications in AWS local zones by extending the company’s VPC from eu-central-1 to the chosen local zone.
AWS Local Zones provide low-latency access to AWS services in specific geographic locations. Deploying applications to local zones can deliver the desired low-latency experience.
700 # A company is migrating its on-premises multi-tier application to AWS. The application consists of a single-node MySQL database and a multi-node web tier. The company should minimize changes to the application during the migration. The company wants to improve the resilience of applications after migration. What combination of steps will meet these requirements? (Choose two.)
A. Migrate the web tier to Amazon EC2 instances in an auto-scaling group behind an application load balancer.
B. Migrate the database to Amazon EC2 instances in an auto-scaling group behind a network load balancer.
C. Migrate the database to an Amazon RDS Multi-AZ deployment.
D. Migrate the web tier to an AWS Lambda function.
E. Migrate the database to an Amazon DynamoDB table.
A. Migrate the web tier to Amazon EC2 instances in an auto-scaling group behind an application load balancer.
C. Migrate the database to an Amazon RDS Multi-AZ deployment.
701 # A company’s e-commerce website has unpredictable traffic and uses AWS Lambda functions to directly access a private Amazon RDS for PostgreSQL DB instance. The company wants to maintain predictable database performance and ensure that Lambda invocations do not overload the database with too many connections. What should a solutions architect do to meet these requirements?
A. Point the client driver to a custom RDS endpoint. Deploy Lambda functions within a VPC.
B. Point the client driver to an RDS proxy endpoint. Deploy Lambda functions within a VPC.
C. Point the client driver to a custom RDS endpoint. Deploy Lambda functions outside of a VPC.
D. Point the client driver to an RDS proxy endpoint. Deploy Lambda functions outside of a VPC.
B. Point the client driver to an RDS proxy endpoint. Deploy Lambda functions within a VPC.
The DB instance is private, so the RDS Proxy endpoint is reachable only inside the VPC; the Lambda functions must therefore be deployed within the VPC to connect through the proxy. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
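A hedged boto3 sketch of attaching an existing Lambda function to the VPC so it can reach the RDS Proxy endpoint (the function name, subnets, security group, and proxy hostname are placeholders):
```python
import boto3

lam = boto3.client("lambda")

# Place the function's ENIs in private subnets that can reach the RDS Proxy.
lam.update_function_configuration(
    FunctionName="orders-api",
    VpcConfig={
        "SubnetIds": ["subnet-aaa", "subnet-bbb"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
    Environment={"Variables": {
        "DB_HOST": "app-db-proxy.proxy-abc123.eu-west-1.rds.amazonaws.com",
    }},
)
```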
702 # A company is creating an application. The company stores application testing data in multiple on-premises locations. The company needs to connect the on-premises locations to VPCs in an AWS Region in the AWS Cloud. The number of accounts and VPCs will increase over the next year. The network architecture must simplify the management of new connections and must provide the ability to scale. Which solution will meet these requirements with the LEAST administrative overhead?
A. Create a peering connection between the VPCs. Create a VPN connection between VPCs and on-premises locations.
B. Start an Amazon EC2 instance. On your instance, include VPN software that uses a VPN connection to connect all VPCs and on-premises locations.
C. Create a transit gateway. Create VPC attachments for VPC connections. Create VPN attachments for on-premises connections.
D. Create an AWS Direct Connect connection between on-premises locations and a central VPC. Connect the core VPC to other VPCs by using peering connections.
C. Create a transit gateway. Create VPC attachments for VPC connections. Create VPN attachments for on-premises connections.
703 # A company using AWS needs a solution to predict the resources needed for manufacturing processes each month. The solution must use historical values that are currently stored in an Amazon S3 bucket. The company has no experience in machine learning (ML) and wants to use a managed service for training and predictions. What combination of steps will meet these requirements? (Choose two.)
A. Deploy an Amazon SageMaker model. Create a SageMaker endpoint for inference.
B. Use Amazon SageMaker to train a model by using the historical data in the S3 bucket.
C. Configure an AWS Lambda function with a function URL that uses Amazon SageMaker endpoints to create predictions based on the inputs.
D. Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor to create a prediction based on the inputs.
E. Train an Amazon Forecast predictor using historical data from the S3 bucket.
D. Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor to create a prediction based on the inputs.
E. Train an Amazon Forecast predictor using historical data from the S3 bucket.
E: Amazon Forecast is a fully managed service that uses machine learning (ML) to generate highly accurate forecasts without requiring any prior ML experience. Forecast is applicable in a wide variety of use cases, including estimating product demand, energy demand, workforce planning, computing cloud infrastructure usage, traffic demand, supply chain optimization, and financial planning.
D: Publish demand using AWS Lambda, AWS Step Functions, and Amazon CloudWatch Events rule to periodically (hourly) query the database and write the past X-months (count from the current timestamp) demand data into the source Amazon S3. https://aws.amazon.com/blogs/machine-learning/automating-your-amazon-forecast-workflow-with-lambda-step-functions-and-cloudwatch-events-rule/
“Alternatively, if you are looking for a fully managed service to deliver highly accurate forecasts, without writing code, we recommend checking out Amazon Forecast. Amazon Forecast is a time-series forecasting service based on machine learning (ML) and built for business metrics analysis.” https://aws.amazon.com/blogs/machine-learning/deep-demand-forecasting-with-amazon-sagemaker/
NOTE: Options A and B (Amazon SageMaker) are the main alternative answer:
A. Deploy an Amazon SageMaker model. Create a SageMaker endpoint for inference.
B. Use Amazon SageMaker to train a model by using the historical data in the S3 bucket.
Amazon SageMaker is a fully managed service that allows you to build, train, and deploy machine learning (ML) models. To predict the resources needed for manufacturing processes based on historical data, you can use SageMaker to train a model.
Once the model is trained, you can deploy it using SageMaker, creating an endpoint for inference. This endpoint can be used to make predictions based on new data.
704 # A company manages AWS accounts in AWS Organizations. AWS IAM Identity Center (AWS Single Sign-On) and AWS Control Tower are configured for the accounts. The company wants to manage multiple user permissions across all accounts. Permissions will be used by multiple IAM users and should be split between the developer and administrator teams. Each team requires different permissions. The company wants a solution that includes new users who are hired on both teams. Which solution will meet these requirements with the LEAST operational overhead?
A. Create individual users in the IAM Identity Center for each account. Create separate developer and administrator groups in IAM Identity Center. Assign users to the appropriate groups. Create a custom IAM policy for each group to set detailed permissions.
B. Create individual users in the IAM Identity Center for each account. Create separate developer and administrator groups in IAM Identity Center. Assign users to the appropriate groups. Attach AWS managed IAM policies to each user as needed for fine-grained permissions.
C. Create individual users in the IAM Identity Center. Create new developer and administrator groups in the IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each group. Assign the new groups to the appropriate accounts. Assign the new permission sets to the new groups. When new users are hired, add them to the appropriate group.
D. Create individual users in the IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each user. Assign users to the appropriate accounts. Grant additional IAM permissions to users from specific accounts. When new users are hired, add them to the IAM Identity Center and assign them to accounts.
C. Create individual users in the IAM Identity Center. Create new developer and administrator groups in the IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each group. Assign the new groups to the appropriate accounts. Assign the new permission sets to the new groups. When new users are hired, add them to the appropriate group.
AWS IAM Identity Center (AWS SSO) provides a centralized place to manage access to multiple AWS accounts. In this scenario, creating separate groups for developers and administrators in IAM Identity Center allows for easier management of permissions. By creating new permission sets that include the appropriate IAM policies for each group, you can assign these permission sets to the respective groups. This approach provides a simplified way to manage permissions for the developer and administrator teams. When new users are hired, you can add them to the appropriate group, and they automatically inherit the permissions associated with that group. This reduces operational overhead when onboarding new users, ensuring they get the necessary permissions based on their roles.
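A hedged boto3 sketch of a group, a permission set, and an account assignment in IAM Identity Center (the instance ARN, identity store ID, account ID, and managed policy are placeholders):
```python
import boto3

sso_admin = boto3.client("sso-admin")
identity_store = boto3.client("identitystore")

instance_arn = "arn:aws:sso:::instance/ssoins-example"
identity_store_id = "d-example1234"

# Group that new developers are added to when they are hired.
group_id = identity_store.create_group(
    IdentityStoreId=identity_store_id,
    DisplayName="Developers",
)["GroupId"]

# Permission set that defines what the developer group can do.
permission_set_arn = sso_admin.create_permission_set(
    InstanceArn=instance_arn,
    Name="DeveloperAccess",
)["PermissionSet"]["PermissionSetArn"]

sso_admin.attach_managed_policy_to_permission_set(
    InstanceArn=instance_arn,
    PermissionSetArn=permission_set_arn,
    ManagedPolicyArn="arn:aws:iam::aws:policy/PowerUserAccess",
)

# Assign the group plus permission set to a member account.
sso_admin.create_account_assignment(
    InstanceArn=instance_arn,
    TargetId="222233334444",
    TargetType="AWS_ACCOUNT",
    PermissionSetArn=permission_set_arn,
    PrincipalType="GROUP",
    PrincipalId=group_id,
)
```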
705 # A company wants to standardize its Amazon Elastic Block Store (Amazon EBS) volume encryption strategy. The company also wants to minimize the cost and configuration effort required to operate volume encryption verification. Which solution will meet these requirements?
A. Write API calls to describe the EBS volumes and to confirm that the EBS volumes are encrypted. Use Amazon EventBridge to schedule an AWS Lambda function to execute API calls.
B. Write API calls to describe the EBS volumes and to confirm that the EBS volumes are encrypted. Run the API calls in an AWS Fargate task.
C. Create an AWS Identity and Access Management (IAM) policy that requires the use of tags on EBS volumes. Use AWS Cost Explorer to display resources that are not tagged correctly. Encrypt untagged resources manually.
D. Create an AWS Config rule for Amazon EBS to evaluate whether a volume is encrypted and to flag the volume if it is not encrypted.
D. Create an AWS Config rule for Amazon EBS to evaluate whether a volume is encrypted and to flag the volume if it is not encrypted.
AWS Config allows you to create rules that automatically check whether your AWS resources meet your desired configurations. In this scenario, you want to standardize your Amazon Elastic Block Store (Amazon EBS) volume encryption strategy and minimize the configuration cost and effort to operate volume encryption verification.
By creating an AWS Config rule specifically for Amazon EBS to evaluate whether a volume is encrypted, you can automate the process of verifying and flagging non-compliant resources. This solution is cost-effective because AWS Config provides a managed service for configuration compliance.
706 # A company regularly uploads GB-sized files to Amazon S3. After the company uploads the files, the company uses a fleet of Amazon EC2 spot instances to transcode the file format. The business needs to scale performance when the business uploads data from the on-premises data center to Amazon S3 and when the business downloads data from Amazon S3 to EC2 instances. What solutions will meet these requirements? (Choose two.)
A. Use the access point to the S3 bucket instead of accessing the S3 bucket directly.
B. Upload the files to multiple S3 buckets.
C. Use S3 multipart uploads.
D. Fetch multiple byte-ranges from an object in parallel.
E. Add a random prefix to each object when uploading files.
C. Use S3 multipart uploads.
D. Fetch multiple byte-ranges from an object in parallel.
S3 multipart uploads allow parallel uploading of parts of a large object, improving performance during the upload process. This is particularly beneficial for large files, as it allows simultaneous uploads of different parts, improving overall upload performance. Multipart uploads are well suited for scaling performance during data uploads.
Fetching multiple byte ranges in parallel is a strategy to improve download performance. By making simultaneous requests for different parts or ranges of an object, you can efficiently use available bandwidth and reduce the time required to download large files. This approach aligns with the goal of scaling performance during data downloads.
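A hedged boto3 sketch of both techniques (the bucket, key, file name, and sizes are placeholders):
```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Upload: boto3 switches to multipart and uploads parts in parallel above the threshold.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,   # use multipart for objects larger than 64 MB
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,                     # upload 10 parts in parallel
)
s3.upload_file("video.raw", "transcode-input", "uploads/video.raw", Config=config)

# Download: fetch two byte ranges of the same object (these calls could run in parallel workers).
first_half = s3.get_object(
    Bucket="transcode-input", Key="uploads/video.raw", Range="bytes=0-67108863"
)["Body"].read()
second_half = s3.get_object(
    Bucket="transcode-input", Key="uploads/video.raw", Range="bytes=67108864-"
)["Body"].read()
```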
707 # A solutions architect is designing a shared storage solution for a web application that is deployed across multiple Availability Zones. The web application runs on Amazon EC2 instances that are in an Auto Scaling group. The company plans to make frequent changes to the content. The solution must return new content with strong consistency as soon as changes occur. Which solutions meet these requirements? (Choose two.)
A. Use the AWS Storage Gateway Volume Gateway Internet Small Computer Systems Interface (iSCSI) block storage that is mounted on the individual EC2 instances.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual EC2 instances.
C. Create an Amazon Elastic Block Store (Amazon EBS) shared volume. Mount the EBS volume on the individual EC2 instances.
D. Use AWS DataSync to perform continuous data synchronization between EC2 hosts in the Auto Scaling group.
E. Create an Amazon S3 bucket to store web content. Set the Cache-Control header metadata to no-cache. Use Amazon CloudFront to deliver content.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual EC2 instances.
E. Create an Amazon S3 bucket to store web content. Set the metadata for the Cache-Control header to no-cache. Use Amazon CloudFront to deliver content.
Amazon EFS is a scalable, shared file storage service. It supports NFS and allows multiple EC2 instances to access the same file system at the same time. This option is suitable for achieving strong consistency and sharing content between instances, making it a good choice for web applications deployed across multiple availability zones.
Amazon S3 can be used as a highly available and scalable storage solution. However, content served through CloudFront can be cached at edge locations, which can delay the delivery of updates. Setting the Cache-Control header to no-cache helps minimize caching, but might not ensure strong consistency for immediate content updates.
In summary, options B (Amazon EFS) and E (Amazon S3 with CloudFront) are more aligned with the goal of achieving strong consistency and sharing content between multiple instances in an Auto Scaling group. Among them, Amazon EFS is a dedicated file storage service designed for this purpose and is often a suitable choice for shared storage in distributed environments.
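A hedged boto3 sketch of the EFS option: one file system with a mount target per Availability Zone used by the Auto Scaling group (the subnets and security group are placeholders):
```python
import boto3

efs = boto3.client("efs")

fs_id = efs.create_file_system(
    CreationToken="web-content-share",
    PerformanceMode="generalPurpose",
)["FileSystemId"]

# One mount target per Availability Zone used by the Auto Scaling group.
for subnet in ["subnet-az1", "subnet-az2"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet,
        SecurityGroups=["sg-0123456789abcdef0"],
    )

# Each EC2 instance then mounts the same file system, e.g. in user data
# (with amazon-efs-utils installed): mount -t efs <fs-id>:/ /var/www/content
```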
708 # A company is deploying an application to three AWS regions using an application load balancer. Amazon Route 53 will be used to distribute traffic between these regions. Which Route 53 configuration should a solutions architect use to provide the highest performing experience?
A. Create an A record with a latency policy.
B. Create an A record with a geolocation policy.
C. Create a CNAME record with a failover policy.
D. Create a CNAME record with a geoproximity policy.
A. Create an A record with a latency policy.
709 # A company has a web application that includes an embedded NoSQL database. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group in a single Availability Zone. A recent increase in traffic requires the application to be highly available and the database to be eventually consistent. Which solution will meet these requirements with the LEAST operational overhead?
A. Replace the ALB with a Network Load Balancer. Keep the embedded NoSQL database with its replication service on the EC2 instances.
B. Replace the ALB with a network load balancer. Migrate the integrated NoSQL database to Amazon DynamoDB using the AWS Database Migration Service (AWS DMS).
C. Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Keep the embedded NoSQL database with its replication service on the EC2 instances.
D. Modify the Auto Scaling group to use EC2 instances across three availability zones. Migrate the embedded NoSQL database to Amazon DynamoDB by using the AWS Database Migration Service (AWS DMS).
D. Modify the Auto Scaling group to use EC2 instances across three availability zones. Migrate the embedded NoSQL database to Amazon DynamoDB by using the AWS Database Migration Service (AWS DMS).
Option D (modify the Auto Scaling group to use EC2 instances in three Availability Zones and migrate the embedded NoSQL database to Amazon DynamoDB) provides high availability and scalability, and reduces operational overhead by leveraging a managed service like DynamoDB. It aligns well with the requirements of a highly available application and an eventually consistent database, with the least operational overhead.
NOTE: C. Modify the Auto Scaling group to use EC2 instances in three Availability Zones. Keep the embedded NoSQL database with its replication service on the EC2 instances.
Explanation: Scaling across three Availability Zones can improve availability, but keeping the embedded NoSQL database and its replication service on EC2 instances still involves operational complexity.
710 # A company is building a shopping application on AWS. The application offers a catalog that changes once a month and needs to scale with traffic volume. The company wants the lowest possible latency for the application. Each user's shopping cart data must be highly available. The user's session data must be available even if the user is offline and logs back in. What should a solutions architect do to ensure that shopping cart data is preserved at all times?
A. Configure an application load balancer to enable the sticky sessions (session affinity) feature to access the catalog in Amazon Aurora.
B. Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart data from the user’s session.
C. Configure the Amazon OpenSearch service to cache Amazon DynamoDB catalog data and shopping cart data from the user session.
D. Configure an Amazon EC2 instance with Amazon Elastic Block Store (Amazon EBS) storage for the catalog and shopping cart. Set up automated snapshots.
B. Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart data from the user’s session.
Amazon ElastiCache for Redis is a managed caching service that can be used to cache frequently accessed data. You can improve performance and help preserve shopping cart data by storing it in Redis, which is an in-memory data store.
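A hedged sketch using the redis-py client against an ElastiCache for Redis endpoint (the endpoint, TLS setting, and key layout are assumptions):
```python
import json
import redis

# Connect to the ElastiCache for Redis primary endpoint (placeholder hostname).
cache = redis.Redis(host="shopping-cache.abc123.euw1.cache.amazonaws.com", port=6379, ssl=True)

def save_cart(user_id: str, cart: dict) -> None:
    # Persist the cart so it survives the user logging out and back in.
    cache.set(f"cart:{user_id}", json.dumps(cart))

def load_cart(user_id: str) -> dict:
    raw = cache.get(f"cart:{user_id}")
    return json.loads(raw) if raw else {}

# Catalog entries change only monthly, so they can be cached with a long TTL.
cache.setex("catalog:item:42", 24 * 3600, json.dumps({"name": "Trail shoes", "price": 89}))
```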
711 # A company is building a microservices-based application to be deployed to Amazon Elastic Kubernetes Service (Amazon EKS). The microservices will interact with each other. The company wants to ensure that the application is observable to identify performance issues in the future. What solution will meet these requirements?
A. Configure the application to use Amazon ElastiCache to reduce the number of requests that are sent to the microservices.
B. Configure Amazon CloudWatch Container Insights to collect metrics from EKS clusters. Configure AWS X-Ray to trace the requests between microservices.
C. Configure AWS CloudTrail to review API calls. Create an Amazon QuickSight dashboard to observe microservice interactions.
D. Use AWS Trusted Advisor to understand application performance.
B. Configure Amazon CloudWatch Container Insights to collect metrics from EKS clusters. Configure AWS X-Ray to trace the requests between microservices.
Amazon CloudWatch Container Insights provides monitoring and observability for containerized applications. Collects metrics from EKS clusters, providing information on resource utilization and application performance. AWS X-Ray, on the other hand, helps track requests as they flow through different microservices, helping to identify bottlenecks and performance issues.
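A hedged sketch of adding X-Ray tracing to one Python microservice with the AWS X-Ray SDK (the X-Ray daemon or collector is assumed to run as a sidecar or DaemonSet in the EKS cluster):
```python
from aws_xray_sdk.core import xray_recorder, patch_all

# Instrument supported libraries (e.g. requests, boto3) so downstream calls become subsegments.
patch_all()
xray_recorder.configure(service="orders-service")

@xray_recorder.capture("process_order")
def process_order(order_id: str) -> None:
    # Business logic here; calls to other microservices made with patched
    # libraries are traced automatically and linked into the request's trace.
    ...

with xray_recorder.in_segment("orders-service-handler"):
    process_order("order-123")
```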
712 # A company needs to provide customers with secure access to their data. The company processes customer data and stores the results in an Amazon S3 bucket. All data is subject to strict regulations and security requirements. Data must be encrypted at rest. Each customer should be able to access their data only from their AWS account. Company employees must not be able to access the data. What solution will meet these requirements?
A. Provide an AWS Certificate Manager (ACM) certificate for each client. Encrypt data on the client side. In the private certificate policy, deny access to the certificate for all principals except for an IAM role that the customer provides.
B. Provision a separate AWS Key Management Service (AWS KMS) key for each client. Encrypt data server side. In the S3 bucket policy, deny decryption of data for all principals except a customer-provided IAM role.
C. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt data server side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the customer provides.
D. Provide an AWS Certificate Manager (ACM) certificate for each client. Encrypt data on the client side. In the public certificate policy, deny access to the certificate for all principals except for an IAM role that the customer provides.
C. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt data server side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the customer provides.
With separate KMS keys for each client and access control through KMS key policies, you can achieve the desired level of security. This allows you to explicitly deny decryption for unauthorized IAM roles.
NOTE: B. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt data server side. In the S3 bucket policy, deny data decryption for all principals except a customer-provided IAM role. – This option may not be effective because denying decryption in the S3 bucket policy does not override the KMS key policy; the key policy could still allow access for unauthorized principals.
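A hedged boto3 sketch of a per-customer key whose key policy only lets the customer's role decrypt (the account IDs and role names are placeholders; key policies are deny-by-default, so decrypt permission is simply not granted to anyone else, and the role that writes the processed results would also need kms:GenerateDataKey):
```python
import boto3, json

kms = boto3.client("kms")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Key administration by the processing account, without granting data access.
            "Sid": "KeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*", "kms:Put*",
                       "kms:Update*", "kms:Revoke*", "kms:Disable*", "kms:Get*",
                       "kms:List*", "kms:ScheduleKeyDeletion", "kms:CancelKeyDeletion"],
            "Resource": "*",
        },
        {   # Only the customer's IAM role may decrypt data encrypted under this key.
            "Sid": "CustomerDecryptOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::999988887777:role/customer-data-access"},
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            "Resource": "*",
        },
    ],
}

kms.create_key(
    Description="Per-customer data key for customer 999988887777",
    Policy=json.dumps(key_policy),
)
```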