sthithapragnakk -- SAA Exam Dumps Jan 24 101-200 Flashcards

1
Q

673 # A company has multiple AWS accounts in an organization in AWS Organizations that different business units use. The company has several offices around the world. The company needs to update security group rules to allow new office CIDR ranges or to remove old CIDR ranges across the organization. The company wants to centralize the management of security group rules to minimize the administrative overhead of updating CIDR ranges. Which solution will meet these requirements in the MOST cost-effective way?

A. Create VPC security groups in the organization’s management account. Update security groups when a CIDR range update is necessary.
B. Create a VPC customer-managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across your organization. Use the prefix list in the security groups throughout your organization.
C. Create a list of prefixes managed by AWS. Use an AWS Security Hub policy to enforce security group updates across your organization. Use an AWS Lambda function to update the prefix list automatically when CIDR ranges change.
D. Create security groups in a central AWS administrative account. Create a common AWS Firewall Manager security group policy for your entire organization. Select the previously created security groups as primary groups in the policy.

A

B. Create a VPC customer-managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across your organization. Use the prefix list in the security groups throughout your organization.

A VPC customer-managed prefix list lets you define a set of CIDR ranges that can be shared across AWS accounts and Regions, providing a centralized place to manage those ranges. AWS Resource Access Manager (AWS RAM) allows you to share resources between AWS accounts, including prefix lists; by sharing the customer-managed prefix list across the organization, management of the CIDR ranges is centralized. You can then reference the shared prefix list in security group rules throughout the organization, which ensures that security groups in multiple AWS accounts use the same centralized set of CIDR ranges. This approach minimizes administrative overhead, enables centralized control, and provides a scalable way to manage security group rules globally.

https://docs.aws.amazon.com/vpc/latest/userguide/managed-prefix-lists.html
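
As an illustration of how the pieces fit together, here is a minimal boto3 sketch; the names, CIDR values, organization ARN, and security group ID are placeholders, not values from the question:

import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# Create a customer-managed prefix list that holds the office CIDR ranges.
prefix_list = ec2.create_managed_prefix_list(
    PrefixListName="office-cidrs",
    AddressFamily="IPv4",
    MaxEntries=50,
    Entries=[
        {"Cidr": "203.0.113.0/24", "Description": "London office"},
        {"Cidr": "198.51.100.0/24", "Description": "Tokyo office"},
    ],
)["PrefixList"]

# Share the prefix list with the whole organization through AWS RAM.
ram.create_resource_share(
    name="shared-office-cidrs",
    resourceArns=[prefix_list["PrefixListArn"]],
    principals=["arn:aws:organizations::111122223333:organization/o-example"],
)

# Member accounts reference the shared prefix list in their security group rules.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "PrefixListIds": [{"PrefixListId": prefix_list["PrefixListId"]}],
    }],
)

Updating an entry in the prefix list then propagates to every security group that references it, which is what removes the per-account administrative overhead.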

2
Q

674 # A company uses an on-premises network attached storage (NAS) system to provide file shares to its high-performance computing (HPC) workloads. The company wants to migrate its latency-sensitive HPC workloads and their storage to the AWS Cloud. The company must be able to provide NFS and SMB multiprotocol access from the file system. Which solutions will meet these requirements with the lowest latency? (Choose two.)

A. Deploy compute-optimized EC2 instances in a cluster placement group.
B. Deploy compute-optimized EC2 instances in a partition placement group.
C. Attach the EC2 instances to an Amazon FSx for Lustre file system.
D. Connect the EC2 instances to an Amazon FSx for OpenZFS file system.
E. Connect the EC2 instances to an Amazon FSx for NetApp ONTAP file system.

A

A. Deploy compute-optimized EC2 instances in a cluster placement group.
E. Connect the EC2 instances to an Amazon FSx for NetApp ONTAP file system.

https://aws.amazon.com/fsx/when-to-choose-fsx/

Cluster placement groups allow you to group instances within a single availability zone to provide low-latency network performance. This is suitable for tightly coupled HPC workloads.

Amazon FSx for NetApp ONTAP supports both NFS and SMB protocols, making it suitable for multi-protocol access.

3
Q

675 # A company is relocating its data center and wants to securely transfer 50TB of data to AWS within 2 weeks. The existing data center has a site-to-site VPN connection to AWS that is 90% utilized. Which AWS service should a solutions architect use to meet these requirements?

A. AWS DataSync with a VPC endpoint
B. AWS Direct Connect
C. AWS Snowball Edge Storage Optimized
D. AWS Storage Gateway

A

C. AWS Snowball Edge Storage Optimized

Transferring 50 TB within 2 weeks over a site-to-site VPN that is already 90% utilized is not practical, so shipping the data on a Snowball Edge Storage Optimized device is the secure and timely choice.

4
Q

676 # A company hosts an application on Amazon EC2 on-demand instances in an Auto Scaling group. The application's peak times occur at the same time every day. Application users report slow performance at the start of peak times. The application performs normally 2-3 hours after the peak time starts. The company wants to make sure the application works properly at the start of peak times. Which solution will meet these requirements?

A. Configure an application load balancer to correctly distribute traffic to the instances.
B. Configure a dynamic scaling policy so that the Auto Scaling group launches new instances based on memory utilization.
C. Configure a dynamic scaling policy for the Auto Scaling group to start new instances based on CPU utilization.
D. Configure a scheduled scaling policy so that the auto-scaling group starts new instances before peak times.

A

D. Configure a scheduled scaling policy so that the auto-scaling group starts new instances before peak times.

5
Q

677 # A company runs applications on AWS that connect to the company's Amazon RDS database. The applications scale on weekends and during peak times of the year. The company wants to scale the database more effectively for the applications that connect to it. Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon DynamoDB with connection pooling with a target pool configuration for the database. Switch applications to use the DynamoDB endpoint.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS proxy endpoint.
C. Use a custom proxy running on Amazon EC2 as the database broker. Change applications to use the custom proxy endpoint.
D. Use an AWS Lambda function to provide a connection pool with a target pool configuration for the database. Change applications to use the Lambda function.

A

B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS proxy endpoint.

Amazon RDS Proxy is a managed database proxy that provides connection pooling, failover, and security features for database applications. It allows applications to scale more effectively and efficiently by managing database connections on their behalf. It integrates well with RDS and keeps operational overhead low.
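
For context, a minimal boto3 sketch of creating an RDS Proxy in front of the database; the proxy name, secret ARN, role ARN, subnet IDs, and DB identifier are hypothetical:

import boto3

rds = boto3.client("rds")

# Create the proxy; it authenticates to the database with credentials stored in Secrets Manager.
rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="MYSQL",
    Auth=[{"AuthScheme": "SECRETS", "SecretArn": "arn:aws:secretsmanager:eu-central-1:111122223333:secret:app-db"}],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-access",
    VpcSubnetIds=["subnet-aaa", "subnet-bbb"],
    RequireTLS=True,
)

# Register the RDS DB instance with the proxy's default target group.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBInstanceIdentifiers=["app-db"],
)

Applications then swap their database hostname for the proxy endpoint; no other application changes are required.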

6
Q

678 # A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that Amazon Elastic Block Store (Amazon EBS) storage and snapshot costs are increasing every month. However, the company does not purchase additional EBS storage every month. The company wants to optimize monthly costs for its current storage usage. Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon CloudWatch Logs to monitor Amazon EBS storage utilization. Use Amazon EBS Elastic Volumes to reduce the size of EBS volumes.
B. Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of EBS volumes.
C. Delete all expired and unused snapshots to reduce snapshot costs.
D. Delete all non-essential snapshots. Use Amazon Data Lifecycle Manager to create and manage snapshots according to your company’s snapshot policy requirements.

A

D. Delete all non-essential snapshots. Use Amazon Data Lifecycle Manager to create and manage snapshots according to your company’s snapshot policy requirements.

Delete all nonessential snapshots: This reduces costs by eliminating unnecessary snapshot storage. Use Amazon Data Lifecycle Manager (DLM): DLM can automate the creation and deletion of snapshots based on defined policies. This reduces operational overhead by automating snapshot management according to the company’s snapshot policy requirements.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
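
A minimal boto3 sketch of a Data Lifecycle Manager policy that snapshots tagged volumes daily and keeps only the last 7 snapshots; the tag key, role ARN, schedule, and retention count are illustrative assumptions:

import boto3

dlm = boto3.client("dlm")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily snapshots with 7-day retention",
    State="ENABLED",
    PolicyDetails={
        "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "daily"}],
        "Schedules": [{
            "Name": "daily-3am",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            # Old snapshots are deleted automatically once the count is exceeded.
            "RetainRule": {"Count": 7},
        }],
    },
)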

7
Q

679 # A company is developing a new application on AWS. The application consists of an Amazon Elastic Container Service (Amazon ECS) cluster, an Amazon S3 bucket that contains assets for the application, and an Amazon RDS for MySQL database that contains the application’s data set. The data set contains sensitive information. The company wants to ensure that only the ECS cluster can access data in the RDS for MySQL database and data in the S3 bucket. What solution will meet these requirements?

A. Create a new AWS Key Management Service (AWS KMS) customer-managed key to encrypt both the S3 bucket and the RDS database for MySQL. Ensure that the KMS key policy includes encryption and decryption permissions for the ECS task execution role.
B. Create an AWS Key Management Service (AWS KMS) AWS Managed Key to encrypt both the S3 bucket and the RDS database for MySQL. Ensure that the S3 bucket policy specifies the ECS task execution role as user.
C. Create an S3 bucket policy that restricts bucket access to the ECS task execution role. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access only from the subnets in which the ECS cluster will launch tasks.
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access only from the subnets in which the ECS cluster will launch tasks. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access only from the S3 VPC endpoint.

A

D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access only from the subnets in which the ECS cluster will launch tasks. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access only from the S3 VPC endpoint.

This approach controls access at the network level by ensuring that the RDS database and S3 bucket are accessible only through the specified VPC endpoints, limiting access to resources within the ECS cluster VPC.

8
Q

680 # A company has a web application that runs on premises. The app experiences latency issues during peak hours. Latency issues occur twice a month. At the beginning of a latency issue, the application’s CPU utilization immediately increases to 10 times its normal amount. The company wants to migrate the application to AWS to improve latency. The company also wants to scale the app automatically when demand for the app increases. The company will use AWS Elastic Beanstalk for application deployment. What solution will meet these requirements?

A. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale based on requests.
B. Configure an Elastic Beanstalk environment to use compute-optimized instances. Configure the environment to scale based on requests.
C. Configure an Elastic Beanstalk environment to use compute-optimized instances. Configure the environment to scale on a schedule.
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.

A

D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.

Predictive scaling works by analyzing historical load data to detect daily or weekly patterns in traffic flows. It uses this information to forecast future capacity needs so Amazon EC2 Auto Scaling can proactively increase the capacity of your Auto Scaling group to match the anticipated load.

Burstable performance instances are designed to absorb sudden spikes in load, and configuring the environment to scale on predictive metrics allows you to scale proactively based on anticipated demand. This aligns well with the requirement to scale automatically when CPU utilization increases 10x during latency issues, which makes option D the most suitable solution for improving latency and automatically scaling the application during peak hours.

Predictive scaling is well suited for situations where you have:

Cyclical traffic, such as high use of resources during regular business hours and low use of resources during evenings and weekends
Recurring on-and-off workload patterns, such as batch processing, testing, or periodic data analysis
Applications that take a long time to initialize, causing a noticeable latency impact on application performance during scale-out events

In general, if you have regular patterns of traffic increases and applications that take a long time to initialize, you should consider using predictive scaling. Predictive scaling can help you scale faster by launching capacity in advance of forecasted load, compared to using only dynamic scaling, which is reactive in nature. Predictive scaling can also potentially save you money on your EC2 bill by helping you avoid the need to over-provision capacity.

9
Q

681 # A company has customers located all over the world. The company wants to use automation to protect its systems and network infrastructure. The company’s security team must be able to track and audit all incremental changes to the infrastructure. What solution will meet these requirements?

A. Use AWS Organizations to configure infrastructure. Use AWS Config to track changes.
B. Use AWS CloudFormation to configure the infrastructure. Use AWS Config to track changes.
C. Use AWS Organizations to configure the infrastructure. Use the AWS Service Catalog to track changes.
D. Use AWS CloudFormation to configure the infrastructure. Use the AWS Service Catalog to track changes.

A

B. Use AWS CloudFormation to configure the infrastructure. Use AWS Config to track changes.

AWS CloudFormation is an infrastructure as code (IaC) service that allows you to define and provision AWS infrastructure. Using CloudFormation ensures automation in infrastructure configuration, and AWS Config can be used to track changes and maintain an inventory of resources.

10
Q

682 # A startup is hosting a website for its customers on an Amazon EC2 instance. The website consists of a stateless Python application and a MySQL database. The website only serves a small amount of traffic. The company is concerned about instance reliability and needs to migrate to a highly available architecture. The company cannot modify the application code. What combination of actions should a solutions architect take to achieve high availability for the website? (Choose two.)

A. Provision an Internet gateway in each availability zone in use.
B. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
C. Migrate the database to Amazon DynamoDB and enable DynamoDB auto-scaling.
D. Use AWS DataSync to synchronize database data across multiple EC2 instances.
E. Create an application load balancer to distribute traffic to an auto-scaling group of EC2 instances that are spread across two availability zones.

A

B. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
E. Create an application load balancer to distribute traffic to an auto-scaling group of EC2 instances that are spread across two availability zones.

Amazon RDS Multi-AZ (Availability Zone) deployments provide high availability for database instances. RDS automatically replicates the database to a standby instance in a different Availability Zone, ensuring failover in the event of a primary AZ failure.

This option ensures that traffic is distributed across multiple EC2 instances for the website. The combination with an auto-scaling group enables demand-based auto-scaling, providing high availability.

Therefore, options E and B together provide a solution to achieve high availability for the website without modifying the application code.

11
Q

683 # A company is moving its data and applications to AWS during a multi-year migration project. The company wants to securely access Amazon S3 data from the company's AWS Region and from the company's on-premises location. The data must not traverse the internet. The company has established an AWS Direct Connect connection between the Region and the on-premises location. What solution will meet these requirements?

A. Create gateway endpoints for Amazon S3. Use gateway endpoints to securely access the data from the Region and the on-premises location.
B. Create a gateway on AWS Transit Gateway to access Amazon S3 securely from your region and on-premises location.
C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.
D. Use an AWS Key Management Service (AWS KMS) key to securely access data from your region and on-premises location.

A

C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.

Gateway endpoints do not allow access from on-premises networks, from peered VPCs in other AWS Regions, or through a transit gateway. For those scenarios, you must use an interface endpoint, which is available for an additional cost. https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
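
A minimal boto3 sketch of creating an S3 interface endpoint that is reachable over Direct Connect; the Region, VPC, subnet, and security group IDs are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",    # gateway endpoints cannot be reached from on-premises networks
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.eu-central-1.s3",
    SubnetIds=["subnet-aaa", "subnet-bbb"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)

On-premises clients resolve the endpoint-specific DNS names over Direct Connect, so S3 traffic never traverses the internet.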

12
Q

684 # A company created a new organization in AWS Organizations. The organization has multiple accounts for the company’s development teams. Development team members use AWS IAM Identity Center (AWS Single Sign-On) to access accounts. For each of the company’s applications, development teams must use a predefined application name to label the resources that are created. A solutions architect needs to design a solution that gives the development team the ability to create resources only if the application name tag has an approved value. What solution will meet these requirements?

A. Create an IAM group that has a conditional permission policy that requires the application name tag to be specified to create resources.
B. Create a cross-account role that has a deny policy for any resources that have the application name tag.
C. Create a resource group in AWS Resource Groups to validate that the tags are applied to all resources in all accounts.
D. Create a tag policy in Organizations that has a list of allowed application names.

A

D. Create a tag policy in Organizations that has a list of allowed application names.

AWS Organizations allows you to create tag policies that define which tags should be applied to resources and which values are allowed. This is an effective way to ensure that only approved application names are used as tags. Therefore, option D, creating a tag policy in Organizations with a list of allowed application names, is the most appropriate solution to enforce the required tag values.

Wrong: A. Create an IAM group that has a conditional permission policy that requires the application name tag to be specified to create resources. While IAM policies may include conditions, they focus more on actions and resources, and may not be best suited to enforce specific tag values.
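
A minimal boto3 sketch of what such a tag policy could look like; the tag key, the allowed application names, and the root ID are hypothetical:

import json
import boto3

org = boto3.client("organizations")

tag_policy = {
    "tags": {
        "appname": {
            "tag_key": {"@@assign": "appname"},
            # Only these values are considered compliant for the tag.
            "tag_value": {"@@assign": ["inventory-app", "billing-app", "reporting-app"]},
            # Enforce the rule when resources of these types are tagged.
            "enforced_for": {"@@assign": ["ec2:instance", "s3:bucket"]},
        }
    }
}

policy = org.create_policy(
    Name="approved-application-names",
    Description="Allow only approved values for the appname tag",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)

# Attach the policy to the organization root (or to specific OUs/accounts).
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="r-exampleroot")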

13
Q

685 # A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure solution to manage the master user password by rotating the password every 30 days. Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every 30 days.
B. Use the modify-db-instance command in the AWS CLI to change the password.
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
D. Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate password rotation.

A

C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.

AWS Secrets Manager provides a managed solution for rotating database credentials, including built-in support for Amazon RDS. It enables automatic rotation of the master user password for RDS for PostgreSQL with minimal operational overhead.
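
For illustration, a minimal boto3 sketch that turns on a 30-day rotation schedule for the database secret; the secret ID is hypothetical and the rotation Lambda function (based on the Secrets Manager RDS rotation templates) is assumed to exist already:

import boto3

sm = boto3.client("secretsmanager")

sm.rotate_secret(
    SecretId="prod/postgres/master",
    # Rotation function created from the Secrets Manager PostgreSQL rotation template.
    RotationLambdaARN="arn:aws:lambda:eu-central-1:111122223333:function:rds-postgres-rotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)

Newer RDS versions can also manage the master password in Secrets Manager directly (the ManageMasterUserPassword setting on the DB instance), which removes the need to maintain a rotation function, though its rotation schedule is fixed rather than 30 days.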

14
Q

686 # A company tests an application that uses an Amazon DynamoDB table. Testing is done for 4 hours once a week. The company knows how many read and write operations the application performs on the table each second during testing. The company does not currently use DynamoDB for any other use cases. A solutions architect needs to optimize the table's costs. Which solution will meet these requirements?

A. Choose on-demand mode. Update read and write capacity units appropriately.
B. Choose provisioned mode. Update read and write capacity units appropriately.
C. Purchase DynamoDB reserved capacity for a period of 1 year.
D. Purchase DynamoDB reserved capacity for a period of 3 years.

A

B. Choose provisioned mode. Update read and write capacity units appropriately.

In provisioned capacity mode, you manually provision read and write capacity units based on your known workload. Because the company knows the read and write operations during testing, it can provision exactly the capacity needed for those specific periods, optimizing costs by not paying for unused capacity at other times.

On-demand mode in DynamoDB automatically adjusts read and write capacity based on actual usage. However, since the workload is known and occurs during specific periods, provisioned mode is likely to be more cost-effective.
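
A minimal boto3 sketch of the idea: dial provisioned capacity up just before the weekly 4-hour test window and back down afterwards; the table name and capacity numbers are illustrative:

import boto3

dynamodb = boto3.client("dynamodb")

def set_capacity(read_units: int, write_units: int) -> None:
    # Switch (or keep) the table in provisioned mode and apply the requested throughput.
    dynamodb.update_table(
        TableName="test-table",
        BillingMode="PROVISIONED",
        ProvisionedThroughput={
            "ReadCapacityUnits": read_units,
            "WriteCapacityUnits": write_units,
        },
    )

# Before the test window: provision the known read/write rates.
set_capacity(read_units=500, write_units=200)

# After the test window: scale down to the minimum to avoid paying for idle capacity.
set_capacity(read_units=1, write_units=1)

The two calls could be scheduled with Amazon EventBridge so that costs stay aligned with the known weekly usage.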

15
Q

687 # A company runs its applications on Amazon EC2 instances. The company conducts periodic financial evaluations of its AWS costs. The company recently identified an unusual expense. The company needs a solution to avoid unusual expenses. The solution should monitor costs and notify responsible stakeholders in case of unusual expenses. What solution will meet these requirements?

A. Use an AWS budget template to create a zero-spend budget.
B. Create an AWS Cost Anomaly Detection Monitor in the AWS Billing and Cost Management Console.
C. Create AWS Pricing Calculator estimates for current running workload pricing details.
D. Use Amazon CloudWatch to monitor costs and identify unusual expenses.

A

B. Create an AWS Cost Anomaly Detection Monitor in the AWS Billing and Cost Management Console.

AWS Cost Anomaly Detection uses machine learning to identify unexpected spending patterns and anomalies in your costs. It can automatically detect unusual expenses and send notifications, making it suitable for the described scenario.
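
A minimal boto3 sketch of setting up a monitor plus a daily email subscription for the responsible stakeholders; the monitor name, threshold, and email address are placeholders, and the simple Threshold field shown here has a newer ThresholdExpression equivalent:

import boto3

ce = boto3.client("ce")

monitor = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "aws-services-monitor",
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",   # track anomalies per AWS service
    }
)

ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "notify-finance-team",
        "MonitorArnList": [monitor["MonitorArn"]],
        "Subscribers": [{"Type": "EMAIL", "Address": "finance@example.com"}],
        "Frequency": "DAILY",
        "Threshold": 100.0,              # only alert on anomalies with an impact above $100
    }
)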

16
Q

688 # A marketing company receives a large amount of new clickstream data in Amazon S3 from a marketing campaign. The company needs to analyze the clickstream data in Amazon S3 quickly. The company then needs to determine whether to process the data further in the data pipeline. Which solution will meet these requirements with the LEAST operational overhead?

A. Create external tables in a Spark catalog. Set up jobs in AWS Glue to query the data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query data.
C. Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query data.
D. Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to query data.

A

B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query data.

AWS Glue Crawler can automatically discover and catalog metadata about clickstream data in S3. Amazon Athena, as a serverless query service, allows you to perform fast ad hoc SQL queries on data without needing to configure and manage the infrastructure.

AWS Glue is a fully managed extract, transform, and load (ETL) service, and Athena is a serverless query service that allows you to analyze data directly in Amazon S3 using SQL queries. By configuring an AWS Glue crawler to crawl the data, you can create a schema for the data, and then use Athena to query the data in place without the need to load it into a separate database. This minimizes operational overhead.
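
A minimal boto3 sketch of the crawler-then-query flow; the bucket paths, database, crawler, and role names are placeholders:

import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the clickstream data and catalog its schema in the Glue Data Catalog.
glue.create_crawler(
    Name="clickstream-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",
    DatabaseName="marketing",
    Targets={"S3Targets": [{"Path": "s3://campaign-bucket/clickstream/"}]},
)
glue.start_crawler(Name="clickstream-crawler")

# Once the table exists, query it ad hoc with Athena - no infrastructure to manage.
athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS hits FROM clickstream GROUP BY page ORDER BY hits DESC LIMIT 20",
    QueryExecutionContext={"Database": "marketing"},
    ResultConfiguration={"OutputLocation": "s3://campaign-bucket/athena-results/"},
)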

17
Q

689 # A company runs an SMB file server in its data center. The file server stores large files that are frequently accessed by the company for up to 7 days after the file creation date. After 7 days, the company should be able to access the files with a maximum recovery time of 24 hours. What solution will meet these requirements?

A. Use AWS DataSync to copy data older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 file gateway to increase the company’s storage space. Create an S3 lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx file gateway to increase your company’s storage space. Create an Amazon S3 lifecycle policy to transition data after 7 days.
D. Configure access to Amazon S3 for each user. Create an S3 lifecycle policy to transition data to S3 Glacier Flexible Retrieval after 7 days.

A

B. Create an Amazon S3 file gateway to increase the company’s storage space. Create an S3 lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.

With an S3 file gateway, you can present an S3 bucket as a file share. Using an S3 lifecycle policy to transition data to Glacier Deep Archive after 7 days allows for cost savings, and recovery time is within the specified 24 hours.
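
A minimal boto3 sketch of the lifecycle rule behind this answer; the bucket name is a placeholder:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="smb-file-share-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-deep-archive-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},     # apply to every object in the bucket
            "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)

S3 Glacier Deep Archive standard retrievals complete within 12 hours, which stays inside the 24-hour recovery window.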

18
Q

690 # A company runs a web application on Amazon EC2 instances in an Auto Scaling group. The application uses a database running on an Amazon RDS for PostgreSQL DB instance. The app runs slowly when traffic increases. The database experiences heavy read load during high traffic periods. What actions should a solutions architect take to resolve these performance issues? (Choose two.)

A. Enable auto-scaling for the database instance.
B. Create a read replica for the database instance. Configure the application to send read traffic to the read replica.
C. Convert the database instance to a Multi-AZ DB instance deployment. Configure the application to send read traffic to the standby database instance.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.
E. Configure Auto Scaling group subnets to ensure that EC2 instances are provisioned in the same availability zone as the database instance.

A

B. Create a read replica for the database instance. Configure the application to send read traffic to the read replica.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.

By creating a read replica, you offload read traffic from the primary database instance to the replica, distributing the read load and improving overall performance. This is a common approach to scale out read-heavy database workloads.

Amazon ElastiCache is a managed caching service that can help improve application performance by caching frequently accessed data. Cached query results in ElastiCache can reduce the load on the PostgreSQL database, especially for repeated read queries.
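
A minimal boto3 sketch of both pieces; the identifiers and node types are illustrative:

import boto3

rds = boto3.client("rds")
elasticache = boto3.client("elasticache")

# Offload read traffic to a read replica of the PostgreSQL instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-read-replica",
    SourceDBInstanceIdentifier="app-db",
    DBInstanceClass="db.r6g.large",
)

# Cache frequently repeated query results in front of the database.
elasticache.create_cache_cluster(
    CacheClusterId="app-query-cache",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumCacheNodes=1,
)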

19
Q

691 # A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an application. The company creates a snapshot of each EBS volume every day to meet compliance requirements. The company wants to implement an architecture that prevents accidental deletion of EBS volume snapshots. The solution should not change the administrative rights of the storage administrator user. Which solution will meet these requirements with the LEAST administrative effort?

A. Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the AWS CLI from the new EC2 instance to delete snapshots.
B. Create an IAM policy that denies the deletion of snapshots. Attach the policy to the storage administrator user.
C. Add tags to snapshots. Create Recycle Bin retention rules for EBS snapshots that have the tags.
D. Lock EBS snapshots to prevent deletion.

A

D. Lock EBS snapshots to prevent deletion.

Amazon EBS provides a built-in feature to lock snapshots, preventing them from being deleted. This is a straightforward and effective solution that does not involve creating additional IAM roles, policies, or tags. It directly addresses the requirement to prevent accidental deletion with minimal administrative effort.
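
A minimal boto3 sketch, assuming the newer EC2 LockSnapshot API is available in the account's SDK version; the snapshot ID and lock duration are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Governance mode lets privileged users release the lock early if truly needed; compliance mode
# cannot be overridden until the lock expires. Neither changes the storage administrator's IAM rights.
ec2.lock_snapshot(
    SnapshotId="snap-0123456789abcdef0",
    LockMode="governance",
    LockDuration=30,    # days
)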

20
Q

692 # An enterprise application uses network load balancers, auto-scaling groups, Amazon EC2 instances, and databases that are deployed in an Amazon VPC. The company wants to capture information about traffic to and from network interfaces in near real time in its Amazon VPC. The company wants to send the information to Amazon OpenSearch Service for analysis. What solution will meet these requirements?

A. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Streams to stream log group logs to the OpenSearch service.
B. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Firehose to stream log group logs to the OpenSearch service.
C. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Streams to stream trail records to the OpenSearch Service.
D. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Firehose to stream the trail logs to the OpenSearch Service.

A

B. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Firehose to stream log group logs to the OpenSearch service.

Other answers:
A. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Streams to stream log group logs to the OpenSearch service. This option involves configuring VPC flow logs to capture network traffic information and send the logs to an Amazon CloudWatch log group. Next, it suggests using Amazon Kinesis Data Streams to stream the log group logs to the Amazon OpenSearch service. While this is technically feasible, using Kinesis Data Streams could introduce unnecessary complexity for this use case.

C. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Streams to stream trail records to the OpenSearch Service. This option involves using AWS CloudTrail to capture VPC flow logs and then using Amazon Kinesis Data Streams to stream the logs to the Amazon OpenSearch Service. However, CloudTrail is typically used to log API activity and may not provide the detailed network traffic information captured by VPC flow logs.

D. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Firehose to stream the trail logs to the OpenSearch Service. Like option C, this option involves using AWS CloudTrail to capture VPC flow logs, but suggests using Amazon Kinesis Data Firehose instead of Kinesis Data Streams. Again, CloudTrail might not be the optimal choice for capturing detailed information about network traffic.
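
A minimal boto3 sketch of the option B pipeline, assuming the IAM roles and the Kinesis Data Firehose delivery stream that targets OpenSearch Service already exist; all ARNs and names are placeholders:

import boto3

ec2 = boto3.client("ec2")
logs = boto3.client("logs")

# 1. Publish VPC flow logs to a CloudWatch Logs log group.
logs.create_log_group(logGroupName="/vpc/flow-logs")
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-to-cwl",
)

# 2. Stream the log group to the Firehose delivery stream that loads OpenSearch Service.
logs.put_subscription_filter(
    logGroupName="/vpc/flow-logs",
    filterName="to-opensearch-firehose",
    filterPattern="",   # forward every event
    destinationArn="arn:aws:firehose:eu-central-1:111122223333:deliverystream/vpc-flow-to-opensearch",
    roleArn="arn:aws:iam::111122223333:role/cwl-to-firehose",
)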

21
Q

693 # A company is developing an application that will run on an Amazon Elastic Kubernetes Service (Amazon EKS) production cluster. The EKS cluster has managed node groups that are provisioned with on-demand instances. The company needs a dedicated EKS cluster for development work. The company will use the development cluster infrequently to test the resilience of the application. The EKS cluster must manage all nodes. Which solution will meet these requirements in the MOST cost-effective way?

A. Create a managed node group that contains only spot instances.
B. Create two managed node groups. Provision one node group with on-demand instances. Provision the second node group with spot instances.
C. Create an Auto Scaling group that has a launch configuration that uses spot instances. Configure the user data to add the nodes to the EKS cluster.
D. Create a managed node group that contains only on-demand instances.

A

B. Create two managed node groups. Provision a group of nodes with on-demand instances. Provision the second node group with spot instances.

This option gives the company a dedicated EKS cluster for development work. By creating two managed node groups, one using on-demand instances and the other using spot instances, the company can manage costs effectively. On-demand instances provide stability and reliability, which is suitable for development work that requires consistent and predictable performance.

Spot instances offer cost savings but come with the trade-off of potential short-notice termination. For infrequent testing and resilience experiments, however, spot instances can be used to optimize costs.
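
A minimal boto3 sketch of adding the Spot-backed managed node group for the development cluster; the cluster name, instance types, subnets, and role ARN are placeholders:

import boto3

eks = boto3.client("eks")

eks.create_nodegroup(
    clusterName="dev-cluster",
    nodegroupName="dev-spot-nodes",
    capacityType="SPOT",                       # the cost-saving part of the answer
    instanceTypes=["m5.large", "m5a.large"],   # several types improve Spot availability
    scalingConfig={"minSize": 0, "maxSize": 4, "desiredSize": 1},
    subnets=["subnet-aaa", "subnet-bbb"],
    nodeRole="arn:aws:iam::111122223333:role/eks-node-role",
)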

22
Q

694 # A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The enterprise needs to fully control users’ ability to create, rotate, and deactivate encryption keys with minimal effort for any data that needs to be encrypted. What solution will meet these requirements?

A. Use default server-side encryption with Amazon S3 Managed Encryption Keys (SSE-S3) to store sensitive data.
B. Create a customer-managed key using AWS Key Management Service (AWS KMS). Use the new key to encrypt S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
C. Create an AWS managed key using the AWS Key Management Service (AWS KMS). Use the new key to encrypt S3 objects using server-side encryption with AWS KMS keys (SSE-KMS).
D. Download the S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer-managed keys. Upload the encrypted objects back to Amazon S3.

A

B. Create a customer-managed key using AWS Key Management Service (AWS KMS). Use the new key to encrypt S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).

AWS KMS allows you to create and manage customer managed keys (CMKs), giving you full control over the key lifecycle. This includes the ability to create, rotate, and deactivate keys as needed. Using server-side encryption with AWS KMS keys (SSE-KMS) ensures that S3 objects are encrypted with the specified customer-managed key. This provides a secure, managed approach to encrypt sensitive data in Amazon S3.

23
Q

695 # A company wants to back up its on-premises virtual machines (VMs) to AWS. The company's backup solution exports local backups to an Amazon S3 bucket as objects. S3 backups must be retained for 30 days and must be automatically deleted after 30 days. What combination of steps will meet these requirements? (Choose three.)

A. Create an S3 bucket that has S3 object locking enabled.

B. Create an S3 bucket that has object versioning enabled.

C. Set a default retention period of 30 days for the objects.

D. Configure an S3 lifecycle policy to protect objects for 30 days.

E. Configure an S3 lifecycle policy to expire the objects after 30 days.

F. Configure the backup solution to tag objects with a 30-day retention period

A

A. Create an S3 bucket that has S3 object locking enabled.
C. Set a default retention period of 30 days for the objects.
E. Configure an S3 lifecycle policy to expire the objects after 30 days.

S3 object locking ensures that objects in the bucket cannot be deleted or modified for a specified retention period. This helps meet the requirement to retain backups for 30 days.

Set a default retention period on the S3 bucket, specifying that objects within the bucket are locked for a duration of 30 days. This enforces the retention policy on the objects.

Use an S3 lifecycle policy to automatically expire (delete) objects in the S3 bucket after the specified 30-day retention period. This ensures that backups are automatically deleted after the retention period.
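
A minimal boto3 sketch combining the three chosen steps; the bucket name and Region are placeholders:

import boto3

s3 = boto3.client("s3")

# A. Object Lock can only be enabled at bucket creation time.
s3.create_bucket(
    Bucket="onprem-vm-backups",
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
    ObjectLockEnabledForBucket=True,
)

# C. Default retention: objects cannot be deleted or overwritten for 30 days.
s3.put_object_lock_configuration(
    Bucket="onprem-vm-backups",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# E. Lifecycle rule: expire (delete) the objects once the 30 days have passed.
s3.put_bucket_lifecycle_configuration(
    Bucket="onprem-vm-backups",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-after-30-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Expiration": {"Days": 30},
        }]
    },
)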

24
Q

696 # A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon EFS) file system and to another S3 bucket. The files must be copied continuously. New files are added to the original S3 bucket consistently. Copied files should be overwritten only if the source file changes. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the target S3 bucket and EFS file system. Set the transfer mode to transfer only the data that has changed.
B. Create an AWS Lambda function. Mount the file system in the function. Configure an S3 event notification to invoke the function when files are created and changed in Amazon S3. Configure the function to copy files to the file system and the destination S3 bucket.
C. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the target S3 bucket and EFS file system. Set the transfer mode to transfer all data.
D. Start an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create a script to routinely synchronize all objects that changed in the source S3 bucket with the destination S3 bucket and the mounted file system.

A

A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the target S3 bucket and EFS file system. Set the transfer mode to transfer only the data that has changed.

AWS DataSync is a managed service that makes it easy to automate, accelerate, and simplify data transfers between on-premises storage systems and AWS storage services. By configuring the transfer mode to transfer only data that has changed, the solution ensures that only changed data is transferred, reducing operational overhead.
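
A minimal boto3 sketch of one of the DataSync tasks (the S3-to-EFS copy); the second task to the destination S3 bucket would be created the same way. All ARNs are placeholders:

import boto3

datasync = boto3.client("datasync")

source = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::source-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3-access"},
)

destination = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:eu-central-1:111122223333:file-system/fs-0123456789abcdef0",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:eu-central-1:111122223333:subnet/subnet-aaa",
        "SecurityGroupArns": ["arn:aws:ec2:eu-central-1:111122223333:security-group/sg-bbb"],
    },
)

datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="s3-to-efs-changed-only",
    # Copy only data that changed since the last run, overwriting files when the source differs.
    Options={"TransferMode": "CHANGED", "OverwriteMode": "ALWAYS"},
)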

25
Q

697 # A company uses Amazon EC2 instances and stores data on Amazon Elastic Block Store (Amazon EBS) volumes. The company must ensure that all data is encrypted at rest using AWS Key Management Service (AWS KMS). The company must be able to control the rotation of the encryption keys. Which solution will meet these requirements with the LEAST operational overhead?

A. Create a customer-managed key. Use the key to encrypt EBS volumes.
B. Use an AWS managed key to encrypt EBS volumes. Use the key to set automatic key rotation.
C. Create an external KMS key with imported key material. Use the key to encrypt EBS volumes.
D. Use an AWS-owned key to encrypt EBS volumes.

A

A. Create a customer-managed key. Use the key to encrypt EBS volumes.

By creating a customer-managed key in AWS Key Management Service (AWS KMS), your business gains control over key rotation and can manage key policies. This enables encryption of EBS volumes with a key that the enterprise can rotate as needed, providing flexibility and control with minimal operational overhead.
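
A minimal boto3 sketch of creating the customer-managed key, enabling rotation, and making it the account default for EBS encryption; the alias name is illustrative:

import boto3

kms = boto3.client("kms")
ec2 = boto3.client("ec2")

key = kms.create_key(Description="Customer-managed key for EBS volumes")
key_id = key["KeyMetadata"]["KeyId"]

kms.create_alias(AliasName="alias/ebs-cmk", TargetKeyId=key_id)
kms.enable_key_rotation(KeyId=key_id)      # automatic rotation, controlled by the company

# Encrypt every new EBS volume in the Region with this key by default.
ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(KmsKeyId=key_id)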

26
Q

698 # An enterprise needs a solution to enforce encryption of data at rest on Amazon EC2 instances. The solution should automatically identify non-compliant resources and enforce compliance policies based on the findings. Which solution will meet these requirements with the LEAST administrative overhead?

A. Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Config and AWS Systems Manager to automate the detection and remediation of unencrypted EBS volumes.
B. Use AWS Key Management Service (AWS KMS) to manage access to encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Lambda and Amazon EventBridge to automate the detection and remediation of unencrypted EBS volumes.
C. Use Amazon Macie to discover unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Systems Manager automation rules to automatically encrypt existing and new EBS volumes.
D. Use Amazon Inspector to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Systems Manager automation rules to automatically encrypt existing and new EBS volumes.

A

A. Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Config and AWS Systems Manager to automate the detection and remediation of unencrypted EBS volumes.

IAM policies can help control the creation of encrypted EBS volumes, and combining them with AWS Config and Systems Manager provides a comprehensive solution for detection and remediation.

27
Q

699 # A company wants to migrate its on-premises web applications to AWS. The company is located close to the eu-central-1 Region. Because of regulations, the company cannot launch some of its applications in eu-central-1. The company wants to achieve single-digit millisecond latency. What solution will meet these requirements?

A. Deploy the applications to eu-central-1. Extend the company's VPC from eu-central-1 to an edge location in Amazon CloudFront.
B. Deploy the applications in AWS Local Zones by extending the company's VPC from eu-central-1 to the chosen Local Zone.
C. Deploy the applications to eu-central-1. Extend the company's VPC from eu-central-1 to the regional edge caches in Amazon CloudFront.
D. Deploy the applications in AWS Wavelength Zones by extending the company's VPC from eu-central-1 to the chosen Wavelength Zone.

A

B. Deploy the applications in AWS Local Zones by extending the company's VPC from eu-central-1 to the chosen Local Zone.

AWS Local Zones provide low-latency access to AWS services in specific geographic locations. Deploying applications to local zones can deliver the desired low-latency experience.

28
Q

700 # A company is migrating its on-premises multi-tier application to AWS. The application consists of a single-node MySQL database and a multi-node web tier. The company should minimize changes to the application during the migration. The company wants to improve the resilience of applications after migration. What combination of steps will meet these requirements? (Choose two.)

A. Migrate the web tier to Amazon EC2 instances in an auto-scaling group behind an application load balancer.

B. Migrate the database to Amazon EC2 instances in an auto-scaling group behind a network load balancer.

C. Migrate the database to an Amazon RDS Multi-AZ deployment.

D. Migrate the web tier to an AWS Lambda function.

E. Migrate the database to an Amazon DynamoDB table.

A

A. Migrate the web tier to Amazon EC2 instances in an auto-scaling group behind an application load balancer.
C. Migrate the database to an Amazon RDS Multi-AZ deployment.

29
Q

701 # A company’s e-commerce website has unpredictable traffic and uses AWS Lambda functions to directly access a private Amazon RDS for PostgreSQL DB instance. The company wants to maintain predictable database performance and ensure that Lambda invocations do not overload the database with too many connections. What should a solutions architect do to meet these requirements?

A. Point the client driver to a custom RDS endpoint. Deploy Lambda functions within a VPC.
B. Point the client driver to an RDS proxy endpoint. Deploy Lambda functions within a VPC.
C. Point the client driver to a custom RDS endpoint. Deploy Lambda functions outside of a VPC.
D. Point the client driver to an RDS proxy endpoint. Deploy Lambda functions outside of a VPC.

A

B. Point the client driver to an RDS proxy endpoint. Deploy Lambda functions within a VPC.

The RDS DB instance is private, in a subnet of the VPC, so the Lambda functions must run inside the VPC to reach the RDS Proxy endpoint, and the proxy's connection pooling keeps invocations from overloading the database. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html

30
Q

702 # A company is creating an application. The company stores application testing data in multiple on-premises locations. The company needs to connect the on-premises locations to VPCs in an AWS Region in the AWS Cloud. The number of accounts and VPCs will increase over the next year. The network architecture must simplify the management of new connections and must provide the ability to scale. Which solution will meet these requirements with the LEAST administrative overhead?

A. Create a peering connection between the VPCs. Create a VPN connection between VPCs and on-premises locations.
B. Launch an Amazon EC2 instance. On the instance, install VPN software that uses a VPN connection to connect all VPCs and on-premises locations.
C. Create a transit gateway. Create VPC attachments for VPC connections. Create VPN attachments for on-premises connections.
D. Create an AWS Direct Connect connection between on-premises locations and a central VPC. Connect the core VPC to other VPCs by using peering connections.

A

C. Create a transit gateway. Create VPC attachments for VPC connections. Create VPN attachments for on-premises connections.

31
Q

703 # A company using AWS needs a solution to predict the resources needed for manufacturing processes each month. The solution must use historical values that are currently stored in an Amazon S3 bucket. The company has no experience in machine learning (ML) and wants to use a managed service for training and predictions. What combination of steps will meet these requirements? (Choose two.)

A. Deploy an Amazon SageMaker model. Create a SageMaker endpoint for inference.
B. Use Amazon SageMaker to train a model by using the historical data in the S3 bucket.
C. Configure an AWS Lambda function with a function URL that uses Amazon SageMaker endpoints to create predictions based on the inputs.
D. Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor to create a prediction based on the inputs.
E. Train an Amazon Forecast predictor by using the historical data in the S3 bucket.

A

D. Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor to create a prediction based on the inputs.
E. Train an Amazon Forecast predictor by using the historical data in the S3 bucket.

E: Amazon Forecast is a fully managed service that uses machine learning (ML) to generate highly accurate forecasts without requiring any prior ML experience. Forecast is applicable in a wide variety of use cases, including estimating product demand, energy demand, workforce planning, computing cloud infrastructure usage, traffic demand, supply chain optimization, and financial planning.

D: Publish demand using AWS Lambda, AWS Step Functions, and an Amazon CloudWatch Events rule to periodically (hourly) query the database and write the past X months of demand data (counted from the current timestamp) into the source Amazon S3 bucket.

https://aws.amazon.com/blogs/machine-learning/automating-your-amazon-forecast-workflow-with-lambda-step-functions-and-cloudwatch-events-rule/

“Alternatively, if you are looking for a fully managed service to deliver highly accurate forecasts, without writing code, we recommend checking out Amazon Forecast. Amazon Forecast is a time-series forecasting service based on machine learning (ML) and built for business metrics analysis.” https://aws.amazon.com/blogs/machine-learning/deep-demand-forecasting-with-amazon-sagemaker/

Options A and B, the Amazon SageMaker alternative, would work as follows:

A. Deploy an Amazon SageMaker model. Create a SageMaker endpoint for inference.
B. Use Amazon SageMaker to train a model by using the historical data in the S3 bucket.

Amazon SageMaker is a fully managed service that allows you to build, train, and deploy machine learning (ML) models. To predict the resources needed for manufacturing processes based on historical data, you can use SageMaker to train a model.

Once the model is trained, you can deploy it using SageMaker, creating an endpoint for inference. This endpoint can be used to make predictions based on new data.

32
Q

704 # A company manages AWS accounts in AWS Organizations. AWS IAM Identity Center (AWS Single Sign-On) and AWS Control Tower are configured for the accounts. The company wants to manage multiple user permissions across all accounts. The permissions will be used by multiple IAM users and must be split between the developer and administrator teams. Each team requires different permissions. The company wants a solution that covers onboarding of new users who are hired onto either team. Which solution will meet these requirements with the LEAST operational overhead?

A. Create individual users in the IAM Identity Center for each account. Create separate developer and administrator groups in IAM Identity Center. Assign users to the appropriate groups. Create a custom IAM policy for each group to set detailed permissions.
B. Create individual users in the IAM Identity Center for each account. Create separate developer and administrator groups in IAM Identity Center. Assign users to the appropriate groups. Attach AWS managed IAM policies to each user as needed for fine-grained permissions.
C. Create individual users in the IAM Identity Center. Create new developer and administrator groups in the IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each group. Assign the new groups to the appropriate accounts. Assign the new permission sets to the new groups. When new users are hired, add them to the appropriate group.
D. Create individual users in the IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each user. Assign users to the appropriate accounts. Grant additional IAM permissions to users from specific accounts. When new users are hired, add them to the IAM Identity Center and assign them to accounts.

A

C. Create individual users in the IAM Identity Center. Create new developer and administrator groups in the IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each group. Assign the new groups to the appropriate accounts. Assign the new permission sets to the new groups. When new users are hired, add them to the appropriate group.

AWS IAM Identity Center (AWS SSO) provides a centralized place to manage access to multiple AWS accounts. In this scenario, creating separate groups for developers and administrators in IAM Identity Center allows for easier management of permissions. By creating new permission sets that include the appropriate IAM policies for each group, you can assign these permission sets to the respective groups. This approach provides a simplified way to manage permissions for the developer and administrator teams. When new users are hired, you can add them to the appropriate group, and they automatically inherit the permissions associated with that group. This reduces operational overhead when onboarding new users, ensuring they get the necessary permissions based on their roles.

33
Q

705 # A company wants to standardize its Amazon Elastic Block Store (Amazon EBS) volume encryption strategy. The company also wants to minimize the cost and configuration effort required to operate volume encryption verification. What solution will meet these requirements?

A. Write API calls to describe the EBS volumes and to confirm that the EBS volumes are encrypted. Use Amazon EventBridge to schedule an AWS Lambda function to execute API calls.
B. Write API calls to describe the EBS volumes and to confirm that the EBS volumes are encrypted. Run the API calls in an AWS Fargate task.
C. Create an AWS Identity and Access Management (IAM) policy that requires the use of tags on EBS volumes. Use AWS Cost Explorer to display resources that are not tagged correctly. Encrypt untagged resources manually.
D. Create an AWS Config rule for Amazon EBS to evaluate whether a volume is encrypted and to flag the volume if it is not encrypted.

A

D. Create an AWS Config rule for Amazon EBS to evaluate whether a volume is encrypted and to flag the volume if it is not encrypted.

AWS Config allows you to create rules that automatically check whether your AWS resources meet your desired configurations. In this scenario, you want to standardize your Amazon Elastic Block Store (Amazon EBS) volume encryption strategy and minimize the configuration cost and effort to operate volume encryption verification.

By creating an AWS Config rule specifically for Amazon EBS to evaluate whether a volume is encrypted, you can automate the process of verifying and flagging non-compliant resources. This solution is cost-effective because AWS Config provides a managed service for configuration compliance.
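
A minimal boto3 sketch of the managed rule that flags unencrypted EBS volumes; the rule name is arbitrary:

import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",
        "Description": "Flags EBS volumes that are not encrypted",
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
        # ENCRYPTED_VOLUMES is an AWS managed rule, so no custom evaluation code is needed.
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
    }
)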

34
Q

706 # A company regularly uploads GB-sized files to Amazon S3. After the company uploads the files, the company uses a fleet of Amazon EC2 spot instances to transcode the file format. The company needs to scale throughput when it uploads data from the on-premises data center to Amazon S3 and when it downloads data from Amazon S3 to the EC2 instances. Which solutions will meet these requirements? (Choose two.)

A. Use the access point to the S3 bucket instead of accessing the S3 bucket directly.
B. Upload the files to multiple S3 buckets.
C. Use S3 multipart uploads.
D. Fetch multiple byte-ranges from an object in parallel.
E. Add a random prefix to each object when uploading files.

A

C. Use S3 multipart uploads.
D. Fetch multiple byte-ranges from an object in parallel.

S3 multipart uploads allow parallel uploading of parts of a large object, improving performance during the upload process. This is particularly beneficial for large files because it allows simultaneous uploads of different parts, improving overall upload performance. Multipart uploads are well suited for scaling performance during data uploads.

Fetching multiple byte ranges in parallel is a strategy to improve download performance. By making simultaneous requests for different parts or ranges of an object, you can efficiently use available bandwidth and reduce the time required to download large files. This approach aligns with the goal of scaling performance during data downloads.
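
A minimal boto3 sketch of both techniques; the bucket, key, and part sizes are illustrative:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Upload: multipart upload with parallel parts for GB-sized files.
upload_config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,   # switch to multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,
)
s3.upload_file("video.raw", "transcode-input", "video.raw", Config=upload_config)

# Download: fetch several byte ranges of the same object in parallel (one range shown here).
part = s3.get_object(
    Bucket="transcode-input",
    Key="video.raw",
    Range="bytes=0-67108863",               # first 64 MB
)["Body"].read()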

35
Q

707 # A solutions architect is designing a shared storage solution for a web application that is deployed across multiple Availability Zones. The web application runs on Amazon EC2 instances that are in an Auto Scaling group. The company plans to make frequent changes to the content. The solution must provide strong consistency, returning new content as soon as changes occur. Which solutions meet these requirements? (Choose two.)

A. Use the AWS Storage Gateway Volume Gateway Internet Small Computer Systems Interface (iSCSI) block storage that is mounted on the individual EC2 instances.

B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual EC2 instances.

C. Create an Amazon Elastic Block Store (Amazon EBS) shared volume. Mount the EBS volume on the individual EC2 instances.

D. Use AWS DataSync to perform continuous data synchronization between EC2 hosts in the Auto Scaling group.

E. Create an Amazon S3 bucket to store web content. Set the Cache-Control header metadata to no-cache. Use Amazon CloudFront to deliver content.

A

B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual EC2 instances.
E. Create an Amazon S3 bucket to store web content. Set the metadata for the Cache-Control header to no-cache. Use Amazon CloudFront to deliver content.

Amazon EFS is a scalable, shared file storage service. It supports NFS and allows multiple EC2 instances to access the same file system at the same time. This option is suitable for achieving strong consistency and sharing content between instances, making it a good choice for web applications deployed across multiple availability zones.

Amazon S3 is a highly available and scalable storage solution and now provides strong read-after-write consistency, so the remaining risk of stale content comes from caching at the CloudFront edge. Setting the Cache-Control header metadata to no-cache minimizes edge caching, so CloudFront revalidates and returns new content as soon as changes occur.

In summary, options B (Amazon EFS) and E (Amazon S3 with CloudFront) are more aligned with the goal of achieving strong consistency and sharing content between multiple instances in an Auto Scaling group. Among them, Amazon EFS is a dedicated file storage service designed for this purpose and is often a suitable choice for shared storage in distributed environments.
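
For the S3 part of the answer, a minimal sketch (bucket and file names are placeholders) of uploading content with the Cache-Control metadata set so CloudFront revalidates instead of serving stale copies:

import boto3

s3 = boto3.client("s3")

# Upload web content with Cache-Control: no-cache so CloudFront revalidates
# the object on each request instead of serving a stale cached copy.
with open("index.html", "rb") as f:
    s3.put_object(
        Bucket="example-web-content",
        Key="index.html",
        Body=f,
        ContentType="text/html",
        CacheControl="no-cache",
    )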

36
Q

708 # A company is deploying an application to three AWS regions using an application load balancer. Amazon Route 53 will be used to distribute traffic between these regions. Which Route 53 configuration should a solutions architect use to provide the highest performing experience?

A. Create an A record with a latency policy.
B. Create an A record with a geolocation policy.
C. Create a CNAME record with a failover policy.
D. Create a CNAME record with a geoproximity policy.

A

A. Create an A record with a latency policy.
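
A latency routing policy points the same name at a Regional endpoint per Region and lets Route 53 answer with the lowest-latency one. A minimal sketch of one such record (the hosted zone ID, ALB DNS name, and zone IDs are placeholders); equivalent records would be created for the other two Regions:

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",                      # placeholder hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": "us-east-1-alb",
            "Region": "us-east-1",                       # latency routing policy
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",        # example ELB zone ID for us-east-1
                "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)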

37
Q

709 # A company has a web application that includes an embedded NoSQL database. The application runs on Amazon EC2 instances behind an application load balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group in a single Availability Zone. A recent increase in traffic requires the application to be highly available and the database to be eventually consistent. Which solution will meet these requirements with the LEAST operational overhead?

A. Replace the ALB with a network load balancer. Keep the NoSQL database integrated with your replication service on EC2 instances.
B. Replace the ALB with a network load balancer. Migrate the integrated NoSQL database to Amazon DynamoDB using the AWS Database Migration Service (AWS DMS).
C. Modify the Auto Scaling group to use EC2 instances in three availability zones. Keep the NoSQL database integrated with your replication service on EC2 instances.
D. Modify the Auto Scaling group to use EC2 instances across three availability zones. Migrate the embedded NoSQL database to Amazon DynamoDB by using the AWS Database Migration Service (AWS DMS).

A

D. Modify the Auto Scaling group to use EC2 instances across three availability zones. Migrate the embedded NoSQL database to Amazon DynamoDB by using the AWS Database Migration Service (AWS DMS).

Option D (modify the Auto Scaling group to use EC2 instances in three Availability Zones and migrate the embedded NoSQL database to Amazon DynamoDB) provides high availability and scalability and reduces operational overhead by leveraging a managed service like DynamoDB. It aligns well with the requirements for a highly available application and an eventually consistent database with the least operational overhead.

NOTE: C. Modify the auto-scaling group to use EC2 instances in three availability zones. Keep the NoSQL database integrated with your replication service on EC2 instances.
Explanation: Scaling across three availability zones can improve availability, but maintaining the NoSQL database integrated with the replication service on EC2 instances still involves operational complexity.

38
Q

710 # A company is building a shopping application on AWS. The application offers a catalog that changes once a month and needs to scale with traffic volume. The company wants the lowest possible latency for the application. Each user’s shopping cart data must be highly available. The user’s session data must be available even if the user is offline and logs back in. What should a solutions architect do to ensure that shopping cart data is preserved at all times?

A. Configure an application load balancer to enable the sticky sessions (session affinity) feature to access the catalog in Amazon Aurora.
B. Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart data from the user’s session.
C. Configure the Amazon OpenSearch service to cache Amazon DynamoDB catalog data and shopping cart data from the user session.
D. Configure an Amazon EC2 instance with Amazon Elastic Block Store (Amazon EBS) storage for the catalog and shopping cart. Set up automated snapshots.

A

B. Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart data from the user’s session.

Amazon ElastiCache for Redis is a managed caching service that can be used to cache frequently accessed data. You can improve performance and help preserve shopping cart data by storing it in Redis, which is an in-memory data store.
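
A minimal sketch with the redis-py client (the endpoint, key naming, and 30-day TTL are assumptions, not from the question) showing a cart kept as a Redis hash so it survives the user going offline and logging back in:

import redis

# Placeholder ElastiCache for Redis endpoint.
r = redis.Redis(host="my-cart-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)

def add_to_cart(user_id, sku, quantity):
    # One hash per user; each field is a SKU with its quantity.
    r.hincrby(f"cart:{user_id}", sku, quantity)
    r.expire(f"cart:{user_id}", 60 * 60 * 24 * 30)       # keep carts for 30 days

def get_cart(user_id):
    return {k.decode(): int(v) for k, v in r.hgetall(f"cart:{user_id}").items()}

add_to_cart("user-123", "sku-456", 2)
print(get_cart("user-123"))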

39
Q

711 # A company is building a microservices-based application to be deployed to Amazon Elastic Kubernetes Service (Amazon EKS). The microservices will interact with each other. The company wants to ensure that the application is observable to identify performance issues in the future. What solution will meet these requirements?

A. Configure the application to use Amazon ElastiCache to reduce the number of requests that are sent to the microservices.
B. Configure Amazon CloudWatch Container Insights to collect metrics from EKS clusters. Configure AWS X-Ray to trace the requests between microservices.
C. Configure AWS CloudTrail to review API calls. Create an Amazon QuickSight dashboard to observe microservice interactions.
D. Use AWS Trusted Advisor to understand application performance.

A

B. Configure Amazon CloudWatch Container Insights to collect metrics from EKS clusters. Configure AWS X-Ray to trace the requests between microservices.

Amazon CloudWatch Container Insights provides monitoring and observability for containerized applications. Collects metrics from EKS clusters, providing information on resource utilization and application performance. AWS X-Ray, on the other hand, helps track requests as they flow through different microservices, helping to identify bottlenecks and performance issues.
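
For the X-Ray half of the answer, a minimal sketch of instrumenting one Python microservice (assumes the aws_xray_sdk package is installed and an X-Ray daemon or agent is reachable from the pods; the service and function names are illustrative):

from aws_xray_sdk.core import xray_recorder, patch_all

# Patch supported libraries (boto3, requests, etc.) so calls to AWS services
# and to other microservices show up as subsegments in the trace.
patch_all()
xray_recorder.configure(service="orders-service")        # placeholder service name

@xray_recorder.capture("lookup_inventory")
def lookup_inventory(item_id):
    # Any downstream HTTP or AWS SDK call made here is traced automatically.
    return {"item_id": item_id, "in_stock": True}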

40
Q

712 # A company needs to provide customers with secure access to their data. The company processes customer data and stores the results in an Amazon S3 bucket. All data is subject to strict regulations and security requirements. Data must be encrypted at rest. Each customer should be able to access their data only from their AWS account. Company employees must not be able to access the data. What solution will meet these requirements?

A. Provide an AWS Certificate Manager (ACM) certificate for each client. Encrypt data on the client side. In the private certificate policy, deny access to the certificate for all principals except for a customer-provided IAM role.
B. Provision a separate AWS Key Management Service (AWS KMS) key for each client. Encrypt data server side. In the S3 bucket policy, deny decryption of data for all principals except a customer-provided IAM role.
C. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt data server side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the customer provides.
D. Provide an AWS Certificate Manager (ACM) certificate for each client. Encrypt data on the client side. In the public certificate policy, deny access to the certificate for all principals except for an IAM role that the customer provides.

A

C. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt data server side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the customer provides.

With separate KMS keys for each client and access control through KMS key policies, you can achieve the desired level of security. This allows you to explicitly deny decryption for unauthorized IAM roles.

NOTE: B. Provide a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt data server side. In the S3 bucket policy, deny data decryption for all principals except a customer-provided IAM role.– This option may not be effective because denying decryption in the S3 bucket policy might not override the key policy. It could grant access to unauthorized parties.
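
A simplified sketch of option C (account IDs, the role ARN, and bucket names are placeholders): each customer gets a dedicated KMS key whose key policy grants kms:Decrypt only to that customer's role, and objects are encrypted server side with that key. In practice the administrative statement would be scoped down so company employees cannot grant themselves decrypt access.

import json
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Root account retains control of the key (simplified; a real policy scopes this down).
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # Only the customer-provided role may decrypt.
            "Sid": "AllowCustomerDecrypt",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:role/CustomerDataAccess"},
            "Action": "kms:Decrypt",
            "Resource": "*",
        },
    ],
}

key = kms.create_key(Description="customer-444455556666 data key",
                     Policy=json.dumps(key_policy))

# Server-side encryption of the customer's results with their dedicated key.
s3.put_object(Bucket="example-results-bucket",
              Key="customer-444455556666/report.csv",
              Body=b"processed results",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId=key["KeyMetadata"]["Arn"])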

41
Q

713 # A solutions architect creates a VPC that includes two public subnets and two private subnets. A corporate security mandate requires the solutions architect to launch all Amazon EC2 instances on a private subnet. However, when the solutions architect starts an EC2 instance running a web server on ports 80 and 443 on a private subnet, no external Internet traffic can connect to the server. What should the solutions architect do to solve this problem?

A. Connect the EC2 instance to an auto-scaling group on a private subnet. Make sure the website’s DNS record resolves to the auto-scaling group ID.
B. Provision an Internet-facing application load balancer (ALB) in a public subnet. Add the EC2 instance to the target group that is associated with the ALB. Ensure that the DNS record for the website resolves to the ALB.
C. Start a NAT gateway on a private subnet. Update the route table for private subnets to add a default route to the NAT gateway. Attach a public elastic IP address to the NAT gateway.
D. Ensure that the security group that is connected to the EC2 instance allows HTTP traffic on port 80 and HTTPS traffic on port 443. Ensure that the website’s DNS record resolves to the public IP address of the EC2 instance.

A

B. Provision an Internet-facing application load balancer (ALB) in a public subnet. Add the EC2 instance to the target group that is associated with the ALB. Ensure that the DNS record for the website resolves to the ALB.

By placing an ALB on the public subnet and adding the EC2 instance to a target group associated with the ALB, external Internet traffic can reach the EC2 instance on the private subnet through the ALB. This configuration allows proper handling of web traffic.

42
Q

714 # A company is deploying a new application on Amazon Elastic Kubernetes Service (Amazon EKS) with an AWS Fargate cluster. The application needs a storage solution for data persistence. The solution must be highly available and fault tolerant. The solution must also be shared between multiple application containers. Which solution will meet these requirements with the LEAST operational overhead?

A. Create Amazon Elastic Block Store (Amazon EBS) volumes in the same Availability Zones where EKS worker nodes are placed. Register the volumes to a StorageClass object on an EKS cluster. Use EBS Multi-Attach to share data between containers.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a StorageClass object on an EKS cluster. Use the same file system for all containers.
C. Create an Amazon Elastic Block Store (Amazon EBS) volume. Register the volume to a StorageClass object in an EKS cluster. Use the same volume for all containers.
D. Create Amazon Elastic File System (Amazon EFS) file systems in the same availability zones where the EKS worker nodes are placed. Register the file systems to a StorageClass object in an EKS cluster. Create an AWS Lambda function to synchronize data between file systems.

A

B. Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a StorageClass object on an EKS cluster. Use the same file system for all containers.

Amazon EFS is a fully managed file storage service that supports NFS, making it easy to share data between multiple containers in an EKS cluster. It is highly available and fault tolerant by design, and its use as a shared storage solution requires minimal operational overhead.

43
Q

715 # A company has an application that uses Docker containers in its on-premises data center. The application runs on a container host that stores persistent data on a volume on the host. Container instances use stored persistent data. The company wants to move the application to a fully managed service because the company does not want to manage any servers or storage infrastructure. What solution will meet these requirements?

A. Use Amazon Elastic Kubernetes Service (Amazon EKS) with self-managed nodes. Create an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance. Use the EBS volume as a persistent volume mounted on the containers.
B. Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted to the containers.
C. Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon S3 bucket. Assign the S3 bucket as a persistent storage volume mounted to the containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with an Amazon EC2 launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted to the containers.

A

B. Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted to the containers.

AWS Fargate is a fully managed serverless computing engine for containers, eliminating the need to manage servers. Amazon EFS is a fully managed, scalable file storage service that allows you to seamlessly share data between containers. This option meets the requirement of not managing servers or storage infrastructure.
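
A trimmed sketch of the Fargate task definition with the EFS volume attached (the family name, image, file system ID, and role ARN are placeholders):

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="legacy-app",                                  # placeholder family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "app",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/legacy-app:latest",
        "mountPoints": [{
            "sourceVolume": "shared-data",
            "containerPath": "/var/app/data",             # where containers see the persistent data
        }],
    }],
    volumes=[{
        "name": "shared-data",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",       # placeholder EFS file system
            "transitEncryption": "ENABLED",
        },
    }],
)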

44
Q

716 # A gaming company wants to launch a new Internet-facing application in multiple AWS regions. The application will use TCP and UDP protocols for communication. The company needs to provide high availability and minimal latency for global users. What combination of actions should a solutions architect take to meet these requirements? (Choose two.)

A. Create internal network load balancers in front of the application in each region.
B. Create external application load balancers in front of the application in each region.
C. Create an AWS global accelerator to route traffic to load balancers in each region.
D. Configure Amazon Route 53 to use a geolocation routing policy to distribute traffic.
E. Configure Amazon CloudFront to manage traffic and route requests to the application in each region

A

A. Create internal network load balancers in front of the application in each region.
C. Create an AWS global accelerator to route traffic to load balancers in each region.

  • Network Load Balancers (NLBs) can handle both TCP and UDP traffic, making them suitable for distributing game traffic within a region. However, NLBs are specific to a single region and do not provide global routing.
  • AWS Global Accelerator supports TCP and UDP protocols, making it suitable for global routing. You can direct traffic to the best-performing endpoints in different regions, providing high availability and low latency.

Together, these two actions meet the requirement: Network Load Balancers in each Region terminate and distribute the TCP and UDP game traffic locally, while AWS Global Accelerator sits in front of them and routes each user over the AWS global network to the best-performing Regional endpoint, providing high availability and minimal latency for global users.

45
Q

717 # A city has deployed a web application running on Amazon EC2 instances behind an Application Load Balancer (ALB). Users of the app have reported sporadic performance, which appears to be related to DDoS attacks originating from random IP addresses. The city needs a solution that requires minimal configuration changes and provides an audit trail for DDoS sources. Which solution meets these requirements?

A. Enable an AWS WAF web ACL on the ALB and configure rules to block traffic from unknown sources.
B. Subscribe to Amazon Inspector. Engage the AWS DDoS Response Team (DRT) to integrate mitigation controls into the service.
C. Subscribe to AWS Shield Advanced. Engage the AWS DDoS Response Team (DRT) to integrate mitigation controls into the service.
D. Create an Amazon CloudFront distribution for the application and set the ALB as the origin. Enable an AWS WAF web ACL on your distribution and configure rules to block traffic from unknown sources

A

C. Subscribe to AWS Shield Advanced. Engage the AWS DDoS Response Team (DRT) to integrate mitigation controls into the service.

AWS Shield Advanced is designed specifically for DDoS protection, and the involvement of the AWS DDoS Response Team (DRT) provides additional support to mitigate DDoS attacks. Requires a subscription to AWS Shield Advanced, which comes with more advanced DDoS protection features.

46
Q

718 # A company copies 200 TB of data from a recent ocean survey to AWS Snowball Edge Storage Optimized devices. The company has a high-performance computing (HPC) cluster that is hosted on AWS to search for oil and gas deposits. A solutions architect must provide the cluster with consistent sub-millisecond latency and high-performance access to data from Snowball Edge Storage Optimized devices. The company is shipping the devices back to AWS. What solution will meet these requirements?

A. Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an AWS Storage Gateway file gateway to use the S3 bucket. Access the file gateway from the HPC cluster instances.
B. Create an Amazon S3 bucket. Import the data into the S3 bucket. Set up an Amazon FSx for Lustre file system and integrate it with the S3 bucket. Access the FSx for Lustre file system from the HPC cluster instances.
C. Create an Amazon S3 bucket and an Amazon Elastic File System (Amazon EFS) file system. Import the data into the S3 bucket. Copy the data from the S3 bucket to the EFS file system. Access the EFS file system from the HPC cluster instances.
D. Create an Amazon FSx for Lustre file system. Import the data directly into the FSx for Lustre file system. Access the FSx for Lustre file system from the HPC cluster instances.

A

D. Create an Amazon FSx for Lustre file system. Import the data directly into the FSx for Lustre file system. Access the FSx for Lustre file system from the HPC cluster instances.

  • Amazon FSx for Lustre is a high-performance, fully managed file system optimized for HPC workloads. By importing the data directly into FSx for Lustre, you can achieve low-latency access and high performance. It is designed to provide high-performance, scalable access to data, making it well suited for HPC scenarios. This option minimizes the need for additional data transfer steps, resulting in efficient access to the data from the Snowball Edge Storage Optimized devices.
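
A minimal sketch of provisioning the Lustre file system with boto3 (the subnet, capacity, and throughput tier are placeholder assumptions); the survey data would then be loaded into the file system and mounted from the HPC cluster instances:

import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=244800,                               # ~240 TiB, sized for the 200 TB survey (placeholder)
    StorageType="SSD",
    SubnetIds=["subnet-0123456789abcdef0"],               # placeholder subnet shared with the HPC cluster
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "PerUnitStorageThroughput": 250,                  # MB/s per TiB (placeholder throughput tier)
    },
)
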
47
Q

719 # A company has NFS servers in an on-premises data center that need to periodically back up small amounts of data to Amazon S3. Which solution meets these requirements and is MOST cost effective?

A. Configure AWS Glue to copy data from on-premises servers to Amazon S3.
B. Set up an AWS DataSync agent on the on-premises servers, and synchronize the data to Amazon S3.
C. Set up an SFTP sync using AWS Transfer for SFTP to sync on-premises data to Amazon S3.
D. Set up an AWS Direct Connect connection between the on-premises data center and a VPC, and copy the data to Amazon S3.

A

B. Set up an AWS DataSync agent on the on-premises servers, and synchronize the data to Amazon S3.

  • AWS DataSync is a fully managed data transfer service that can efficiently and securely transfer data between on-premises storage and Amazon S3. A DataSync agent configured on the on-premises servers performs incremental and parallel transfers, optimizing the use of available bandwidth and minimizing costs.
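
A rough sketch of wiring the DataSync task once the agent has been activated (the ARNs, hostnames, and paths are placeholders):

import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS export, reached through the activated DataSync agent.
nfs = datasync.create_location_nfs(
    ServerHostname="nas01.corp.example.com",
    Subdirectory="/exports/backups",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"]},
)

# Destination: the S3 bucket, written through a bucket access role.
s3_loc = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-backup-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Access"},
)

# The task copies only changed files on each run, which suits small periodic backups.
datasync.create_task(
    SourceLocationArn=nfs["LocationArn"],
    DestinationLocationArn=s3_loc["LocationArn"],
    Name="nightly-nfs-to-s3",
)
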
48
Q

720 # An online gaming company must maintain ultra-low latency for its game servers. The game servers run on Amazon EC2 instances. The company needs a solution that can handle millions of UDP Internet traffic requests every second. Which solution will meet these requirements in the MOST cost-effective way?

A. Configure an application load balancer with the protocol and ports required for Internet traffic. Specify EC2 instances as targets.
B. Configure a gateway load balancer for Internet traffic. Specify EC2 instances as targets.
C. Configure a network load balancer with the required protocol and ports for the Internet traffic. Specify the EC2 instances as the targets.
D. Start an identical set of game servers on EC2 instances in separate AWS regions. Routes Internet traffic to both sets of EC2 instances.

A

C. Configure a network load balancer with the required protocol and ports for the Internet traffic. Specify the EC2 instances as the targets.

  • Network Load Balancers (NLB) are designed to provide ultra-low latency and high throughput performance. They operate at the connection or network layer (layer 4) and are well suited for UDP traffic. NLB is optimized to handle millions of requests per second with minimal latency compared to application load balancers (ALBs) or gateway load balancers.
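
A minimal sketch of the NLB setup for the UDP game traffic (the names, subnet and VPC IDs, and the port are placeholders):

import boto3

elbv2 = boto3.client("elbv2")

nlb = elbv2.create_load_balancer(
    Name="game-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

tg = elbv2.create_target_group(
    Name="game-servers",
    Protocol="UDP",
    Port=27015,                                           # placeholder game port
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="UDP",
    Port=27015,
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)
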
49
Q

721 # A company runs a three-tier application in a VPC. The database tier uses an Amazon RDS instance for the MySQL database. The company plans to migrate the RDS for MySQL DB instance to an Amazon Aurora PostgreSQL DB cluster. The company needs a solution that replicates the data changes that occur during migration to the new database. What combination of steps will meet these requirements? (Choose two.)

A. Use AWS Database Migration Service (AWS DMS) schema conversion to transform the database objects.

B. Use AWS Database Migration Service (AWS DMS) schema conversion to create an Aurora PostgreSQL read replica on the RDS instance for the MySQL database.

C. Configure an Aurora MySQL read replica for the RDS for MySQL DB instance.

D. Define an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) to migrate the data.

E. Promote the Aurora PostgreSQL read replica to a standalone Aurora PostgreSQL database cluster when the replication lag is zero.

A

A. Use AWS Database Migration Service (AWS DMS) schema conversion to transform the database objects.
D. Define an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) to migrate the data.

To migrate your RDS for MySQL DB instance to an Amazon Aurora PostgreSQL DB cluster and replicate data changes during the migration, you can use the following combination of steps:
A. **Use Database Migration Service Schema Conversion AWS Data Management System (AWS DMS) to transform database objects. ** - AWS DMS Schema Conversion can be used to convert MySQL database schema and objects to PostgreSQL-compatible syntax.
D. **Define an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) to migrate the data. ** - AWS DMS supports change data capture (CDC), which allows the migration service to capture changes that occur to the source database (RDS for MySQL) during the migration process. This ensures that any changes in progress are replicated to the Aurora PostgreSQL database cluster.
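
A trimmed sketch of the CDC task, assuming the replication instance and the source and target endpoints already exist (all ARNs are placeholders):

import json
import boto3

dms = boto3.client("dms")

table_mappings = {"rules": [{
    "rule-type": "selection",
    "rule-id": "1",
    "rule-name": "include-all",
    "object-locator": {"schema-name": "%", "table-name": "%"},
    "rule-action": "include",
}]}

# full-load-and-cdc copies the existing data and then replicates ongoing changes,
# so writes made during the migration reach the Aurora PostgreSQL cluster.
dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-aurora-postgresql",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)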

50
Q

722 # A company hosts a database running on an Amazon RDS instance that is deployed across multiple Availability Zones. The company periodically runs a script against the database to report new entries added to the database. The script running on the database negatively affects the performance of a critical application. The company needs to improve application performance with minimal costs. Which solution will meet these requirements with the LEAST operational overhead?

A. Add functionality to the script to identify the instance that has the fewest active connections. Configure the script to read from that instance to report total new entries.
B. Create a read replica of the database. Configure the script to query only the read replica to report total new entries.
C. Instruct the development team to manually export the day’s new entries into the database at the end of each day.
D. Use Amazon ElastiCache to cache common queries that the script runs against the database.

A

B. Create a read replica of the database. Configure the script to query only the read replica to report total new entries.

  • When creating a database read replica, it offloads the read-intensive workload of the reporting script from the primary database instance. This helps improve performance on the primary instance, minimizing the impact on the critical application. Additionally, read replicas are kept up to date through asynchronous replication, providing near real-time data for reporting without impacting the performance of the primary instance.
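
A one-call sketch (the instance identifiers are placeholders); the reporting script then points its connection string at the replica's endpoint instead of the primary:

import boto3

rds = boto3.client("rds")

# The replica serves the reporting script; the Multi-AZ primary keeps serving the application.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="reporting-replica",
    SourceDBInstanceIdentifier="prod-db",
)
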
51
Q

723 # A company is using an application load balancer (ALB) to present its application to the Internet. The company finds abnormal traffic access patterns throughout the application. A solutions architect needs to improve infrastructure visibility to help the business better understand these anomalies. What is the most operationally efficient solution that meets these requirements?

A. Create a table in Amazon Athena for AWS CloudTrail logs. Create a query for the relevant information.
B. Enable ALB access logging on Amazon S3. Create a table in Amazon Athena and query the logs.
C. Enable ALB access logging in Amazon S3. Open each file in a text editor and look at each line for the relevant information.
D. Use Amazon EMR on a dedicated Amazon EC2 instance to directly query the ALB and acquire traffic access log information.

A

B. Enable ALB access logging on Amazon S3. Create a table in Amazon Athena and query the logs.

  • Enabling ALB access logging to Amazon S3 allows you to capture detailed logs of incoming requests to ALB. By creating a table in Amazon Athena and querying these logs, you gain the ability to analyze and understand traffic patterns, identify anomalies, and perform queries efficiently. Athena provides an interactive, serverless query service that allows you to analyze data directly in Amazon S3 without needing to manage the infrastructure.
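
A rough sketch of enabling the access logs and running a query, assuming an Athena table over the ALB log format has already been created (the ARN, bucket, and table names are placeholders):

import boto3

elbv2 = boto3.client("elbv2")
athena = boto3.client("athena")

# Ship ALB access logs to S3.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc123",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "example-alb-logs"},
        {"Key": "access_logs.s3.prefix", "Value": "web-alb"},
    ],
)

# Query the logs with Athena, e.g. the top client IPs by request count.
athena.start_query_execution(
    QueryString="""
        SELECT client_ip, count(*) AS requests
        FROM alb_logs                -- hypothetical table created over the log files
        GROUP BY client_ip
        ORDER BY requests DESC
        LIMIT 20
    """,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
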
52
Q

724 # A company wants to use NAT gateways in its AWS environment. The company’s Amazon EC2 instances in private subnets must be able to connect to the public Internet through the NAT gateways. Which solution will meet these requirements?

A. Create public NAT gateways on the same private subnets as the EC2 instances.
B. Create private NAT gateways on the same private subnets as the EC2 instances.
C. Create public NAT gateways in public subnets in the same VPCs as the EC2 instances.
D. Create private NAT gateways in public subnets in the same VPCs as the EC2 instances.

A

C. Create public NAT gateways in public subnets in the same VPCs as the EC2 instances.

53
Q

725 # A company has an organization in AWS Organizations. The company runs Amazon EC2 instances in four AWS accounts in the root organizational unit (OU). There are three non-production accounts and one production account. The company wants to prohibit users from launching EC2 instances of a certain size on non-production accounts. The company has created a service control policy (SCP) to deny access to launch instances that use the prohibited types. What solutions for implementing the SCP will meet these requirements? (Choose two.)

A. Attach the SCP to the organization’s root OU.

B. Attach the SCP to the three non-production organizations member accounts.

C. Attach the SCP to the organization’s management account.

D. Create an OU for the production account. Attach the SCP to the OU. Move the production member account to the new OU.

E. Create an OU for the required accounts. Attach the SCP to the OU. Move non-production member accounts to the new OU.

A

B. Attach the SCP to the three non-production organizations member accounts.
- Attaching the SCP directly to non-production member accounts ensures that the policy applies specifically to those accounts. This way, the policy denies launching EC2 instances of the prohibited size on non-production accounts.
E. Create an OU for the required accounts. Attach the SCP to the OU. Move non-production member accounts to the new OU.
- By creating a separate OU for non-production accounts and attaching the SCP to that OU, you can isolate the application of the policy to only non-production accounts. Moving non-production member accounts to the new OU associates them with the SCP.

In summary, options B and E are appropriate solutions based on the requirement to prohibit users from launching EC2 instances of a certain size in non-production accounts while allowing them in the production account. Option D should be avoided because it would apply the deny to the production account, and options A and C are not appropriate because attaching the SCP to the root OU would affect every account in the organization, and SCPs have no effect on the management account.

54
Q

726 # A company website hosted on Amazon EC2 instances processes classified data stored in Amazon S3. Due to security concerns, the company requires a private, secure connection between its EC2 resources and Amazon S3. Which solution meets these requirements?

A. Set up S3 bucket policies to allow access from a VPC endpoint.
B. Configure an IAM policy to grant read and write access to the S3 bucket.
C. Configure a NAT gateway to access resources outside the private subnet.
D. Configure an access key ID and secret access key to access the S3 bucket.

A

A. Set up S3 bucket policies to allow access from a VPC endpoint.

  • This option involves creating a VPC endpoint for Amazon S3 in your Amazon VPC. A VPC endpoint allows you to privately connect your VPC to S3 without going over the public Internet. By configuring S3 bucket policies to allow access from the VPC endpoint, you ensure that EC2 instances within your VPC can securely access S3 without requiring public Internet access. This is a more secure and recommended approach to handling sensitive data.
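
A minimal sketch of the setup (the VPC, route table, and bucket names are placeholders); note that a real deny statement of this kind usually carves out an administrative role so the bucket cannot be locked out entirely:

import json
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Gateway endpoint for S3, so traffic from the VPC never crosses the public Internet.
endpoint = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
endpoint_id = endpoint["VpcEndpoint"]["VpcEndpointId"]

# Bucket policy that rejects any request not arriving through the endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyFromVpce",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::example-classified-bucket",
                     "arn:aws:s3:::example-classified-bucket/*"],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": endpoint_id}},
    }],
}
s3.put_bucket_policy(Bucket="example-classified-bucket", Policy=json.dumps(policy))
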
55
Q

727 # A company is designing a web application on AWS. The application will use a VPN connection between the company’s existing data centers and the company’s VPCs. The company uses Amazon Route 53 as its DNS service. Your application must use private DNS records to communicate with on-premises services from a VPC. Which solution will meet these requirements in the MOST secure way?

A. Create a Route 53 Resolver outbound endpoint. Create a Resolver rule. Associate the Resolver rule with the VPC.
B. Create a Route 53 Resolver inbound endpoint. Create a Resolver rule. Associate the Resolver rule with the VPC.
C. Create a Route 53 private hosted zone. Associate the private hosted zone with the VPC.
D. Create a Route 53 public hosted zone. Create a record for each service to allow service communication.

A

C. Create a Route 53 private hosted zone. Associate the private hosted zone with the VPC.

  • This option involves creating a Route 53 private hosted zone, which allows you to define custom DNS records for private communication within your VPC. Associating the private hosted zone with the VPC ensures that DNS records are used to resolve domain names within the specified VPC. This approach is secure because it allows you to control DNS records for private communication.
56
Q

728 # A company is running a photo hosting service in the us-east-1 region. The service allows users from various countries to upload and view photos. Some photos are viewed a lot for months, and others are viewed for less than a week. The application allows uploads of up to 20 MB for each photo. The service uses photo metadata to determine which photos to show to each user. Which solution provides access to the right user in the MOST cost-effective way?

A. Save photos to Amazon DynamoDB. Turn on DynamoDB Accelerator (DAX) to cache frequently viewed items.
B. Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location in DynamoDB.
C. Store photos in the standard Amazon S3 storage class. Configure an S3 lifecycle policy to move photos older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Use object tags to keep track of metadata.
D. Save photos to the Amazon S3 Glacier storage class. Configure an S3 lifecycle policy to move photos older than 30 days to the S3 Glacier Deep Archive storage class. Store the photo metadata and its S3 location in Amazon OpenSearch Service.

A

B. Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location in DynamoDB.

  • Amazon S3 Intelligent-Tiering automatically moves objects between two access levels: frequent and infrequent access. It is designed for objects with unknown or changing access patterns. This is suitable for a photo hosting service where some photos are viewed a lot for months, and others are viewed for a short period. - Storing photo metadata and S3 location in DynamoDB provides a fast and scalable way to query and retrieve information about photos. DynamoDB is well suited for handling metadata and providing fast searches based on metadata.
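
A minimal sketch of the write path (the bucket, table, and attribute names are placeholders): the photo goes to S3 Intelligent-Tiering, and its metadata plus S3 location go to DynamoDB.

import boto3

s3 = boto3.client("s3")
photos = boto3.resource("dynamodb").Table("Photos")       # placeholder table name

def store_photo(photo_id, user_id, local_path, tags):
    key = f"photos/{user_id}/{photo_id}.jpg"
    # Intelligent-Tiering moves the object between access tiers as viewing patterns change.
    s3.upload_file(local_path, "example-photo-bucket", key,
                   ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"})
    # The metadata drives which photos are shown to which users.
    photos.put_item(Item={
        "photo_id": photo_id,
        "user_id": user_id,
        "s3_key": key,
        "tags": tags,
    })
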
57
Q

729 # A company runs a highly available web application on Amazon EC2 instances behind an application load balancer. The company uses Amazon CloudWatch metrics. As traffic to the web application increases, some EC2 instances become overloaded with many pending requests. CloudWatch metrics show that the number of requests processed and the time to receive responses for some EC2 instances are higher compared to other EC2 instances. The company does not want new requests to be forwarded to EC2 instances that are already overloaded. What solution will meet these requirements?

A. Use the round robin routing algorithm based on the RequestCountPerTarget and ActiveConnectionCount CloudWatch metrics.
B. Use the least outstanding requests algorithm based on the RequestCountPerTarget and ActiveConnectionCount CloudWatch metrics.
C. Use the round robin routing algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.
D. Use the least outstanding requests algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.

A

D. Use the least outstanding requests algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.

  • The “least outstanding requests” algorithm, also known as the least outstanding requests balancing algorithm, considers the number of outstanding requests and the target response time. Its goal is to distribute new requests to the instances that have fewer outstanding requests and optimal response time.
  • In this scenario, using RequestCount (to measure the number of requests) and TargetResponseTime (to evaluate the responsiveness of the instances) CloudWatch metrics together allow for a more informed decision about routing traffic to the instances that are less loaded.
58
Q

730 # A company uses Amazon EC2, AWS Fargate, and AWS Lambda to run multiple workloads in the company’s AWS account. The company wants to make full use of its Compute Savings Plans. The company wants to be notified when the coverage of the Compute Savings Plans decreases. Which solution will meet these requirements with the GREATEST operational efficiency?

A. Create a daily budget for savings plans using AWS Budgets. Configure the budget with a coverage threshold to send notifications to appropriate email recipients.
B. Create a Lambda function that runs a coverage report against the Savings Plans. Use Amazon Simple Email Service (Amazon SES) to email the report to the appropriate email recipients.
C. Create an AWS Budgets report for the savings plans budget. Set the frequency to daily.
D. Create a savings plan alert subscription. Enable all notification options. Enter an email address to receive notifications.

A

D. Create a savings plan alert subscription. Enable all notification options. Enter an email address to receive notifications.

  • Savings plan alert subscriptions allow you to set up notifications based on various thresholds, including coverage thresholds. By enabling all notification options, you can receive timely alerts through different channels when coverage decreases.
59
Q

731 # A company runs a real-time data ingestion solution on AWS. The solution consists of the latest version of Amazon Managed Streaming for Apache Kafka (Amazon MSK). The solution is deployed in a VPC on private subnets in three availability zones. A solutions architect needs to redesign the data ingestion solution to make it publicly available over the Internet. Data in transit must also be encrypted. Which solution will meet these requirements with the GREATEST operational efficiency?

A. Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication.

B. Create a new VPC that has public subnets. Deploy an MSK cluster on the public subnets. Update the MSK cluster security configuration to enable TLS mutual authentication.

C. Deploy an application load balancer (ALB) that uses private subnets. Configure an ALB security group inbound rule to allow incoming traffic from the VPC CIDR block for the HTTPS protocol.

D. Deploy a network load balancer (NLB) that uses private subnets. Configure an NLB listener for HTTPS communication over the Internet.

A

A. Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication.

  • This option takes advantage of the existing VPC, minimizing the need to create a new VPC. By deploying the MSK cluster on public subnets and enabling mutual TLS authentication, you can ensure that the MSK cluster is publicly accessible while protecting data in transit.
60
Q

732 # A company wants to migrate an on-premises legacy application to AWS. The application ingests customer order files from a local enterprise resource planning (ERP) system. The application then uploads the files to an SFTP server. The application uses a scheduled job that checks the order files every hour. The company already has an AWS account that has connectivity to the local network. The new application on AWS must support integration with the existing ERP system. The new application must be secure and resilient and must use the SFTP protocol to process orders from the ERP system immediately. What solution will meet these requirements?

A. Create an Internet-facing AWS Transfer Family SFTP server in two availability zones. Use Amazon S3 storage. Create an AWS Lambda function to process the order files. Use S3 event notifications to send s3:ObjectCreated:* events to the Lambda function.
B. Create an Internet-facing AWS Transfer Family SFTP server in an Availability Zone. Use Amazon Elastic File System (Amazon EFS) storage. Create an AWS Lambda function to process the order files. Use a workflow managed by the transfer family to invoke the Lambda function.
C. Create an internal AWS Transfer Family SFTP server in two availability zones. Use Amazon Elastic File System (Amazon EFS) storage. Create an AWS Step Functions state machine to process order files. Use Amazon EventBridge Scheduler to invoke the state machine and periodically check Amazon EFS order files.
D. Create an AWS Transfer Family SFTP internal server in two availability zones. Use Amazon S3 storage. Create an AWS Lambda function to process order files. Use a transfer family managed workflow to invoke the Lambda function.

A

D. Create an AWS Transfer Family SFTP internal server in two availability zones. Use Amazon S3 storage. Create an AWS Lambda function to process order files. Use a transfer family managed workflow to invoke the Lambda function.

  • Uses an internal (VPC-hosted) SFTP server, which keeps the endpoint private and secure.
  • Amazon S3 provides durable, scalable storage.
  • An AWS Lambda function processes the order files as they arrive.
  • A Transfer Family managed workflow invokes the Lambda function, so orders are processed immediately with minimal custom plumbing.

In summary, taking into account the clarified requirements, Option D stands out as a suitable option, leveraging an internal SFTP server in two Availability Zones with Amazon S3 storage and an AWS Lambda function for efficient order file processing.

61
Q

733 # An enterprise’s applications use Apache Hadoop and Apache Spark to process data on premises. The existing infrastructure is not scalable and is complex to manage. A solutions architect must design a scalable solution that reduces operational complexity. The solution must keep data processing on-premises. What solution will meet these requirements?

A. Use AWS Site-to-Site VPN to access on-premises Hadoop Distributed File System (HDFS) data and application. Use an Amazon EMR cluster to process the data.
B. Use AWS DataSync to connect to the local Hadoop Distributed File System (HDFS) cluster. Create an Amazon EMR cluster to process the data.
C. Migrate the Apache Hadoop application and the Apache Spark application to Amazon EMR clusters on AWS Outposts. Use the EMR clusters to process the data.
D. Use an AWS Snowball device to migrate data to an Amazon S3 bucket. Create an Amazon EMR cluster to process the data.

A

C. Migrate the Apache Hadoop application and the Apache Spark application to Amazon EMR clusters on AWS Outposts. Use the EMR clusters to process the data.

  • Use AWS Outposts for a local extension of AWS infrastructure.
  • EMR clusters for scalable and managed data processing.
62
Q

734 # A company is migrating a large amount of data from local storage to AWS. Windows, Mac, and Linux-based Amazon EC2 instances in the same AWS Region will access data using SMB and NFS storage protocols. The company will access some of the data on a routine basis. The company will access the remaining data infrequently. The company needs to design a solution to host the data. Which solution will meet these requirements with the LESS operating overhead?

A. Create an Amazon Elastic File System (Amazon EFS) volume that uses EFS Intelligent-Tiering. Use AWS DataSync to migrate data to the EFS volume.
B. Create an Amazon FSx for NetApp ONTAP file system with a root volume that uses the auto tiering policy. Migrate the data to the FSx for ONTAP volume.
C. Create an Amazon S3 bucket that uses S3 Intelligent-Tiering. Migrate the data to the S3 bucket by using an AWS Storage Gateway S3 File Gateway.
D. Create an Amazon FSx file system for OpenZFS. Migrate the data to the new volume.

A

C. Create an Amazon S3 bucket that uses S3 Intelligent-Tiering. Migrate the data to the S3 bucket by using an AWS Storage Gateway S3 File Gateway.

  • S3 Intelligent-Tiering automatically moves objects between access levels based on changing access patterns. - Storage Gateway supports SMB and NFS protocols.

https://aws.amazon.com/s3/faqs/
- The total volume of data and number of objects you can store in Amazon S3 are unlimited.

63
Q

735 # A manufacturing company runs its reporting application on AWS. The application generates each report in about 20 minutes. The application is built as a monolith running on a single Amazon EC2 instance. The application requires frequent updates of its tightly coupled modules. The application becomes complex to maintain as the company adds new features. Every time the company patches a software module, the application experiences downtime. Report generation must be restarted from the beginning after any interruption. The company wants to redesign the application so that the application can be flexible, scalable and improve gradually. The company wants to minimize application downtime. What solution will meet these requirements?

A. Run the application in AWS Lambda as a single function with maximum provisioned concurrency.
B. Run the application on Amazon EC2 Spot Instances as microservices with a default Spot Fleet allocation strategy.
C. Run the application on Amazon Elastic Container Service (Amazon ECS) as microservices with service auto-scaling.
D. Run the application on AWS Elastic Beanstalk as a single application environment with an all-at-once deployment strategy.

A

C. Run the application on Amazon Elastic Container Service (Amazon ECS) as microservices with service auto-scaling.

  • ECS allows running microservices with automatic service scaling based on demand.
  • Offers flexibility and scalability.
  • Option C (Amazon ECS with microservices and service auto-scaling) appears to better align with the company’s requirements for flexibility, scalability, and minimal downtime.
64
Q

736 # A company wants to redesign a large-scale web application to a serverless microservices architecture. The application uses Amazon EC2 instances and is written in Python. The company selected a web application component to test as a microservice. The component supports hundreds of requests every second. The company wants to build and test the microservice on an AWS solution that supports Python. The solution should also scale automatically and require minimal infrastructure and operational support. What solution will meet these requirements?

A. Use an auto-scaling Spot fleet of EC2 instances running the latest Amazon Linux operating system.
B. Use an AWS Elastic Beanstalk web server environment that has high availability configured.
C. Use Amazon Elastic Kubernetes Service (Amazon EKS). Start auto-scaling groups of self-managed EC2 instances.
D. Use an AWS Lambda function that runs custom-developed code.

A

D. Use an AWS Lambda function that runs custom-developed code.

  • Serverless architecture, without the need to manage the infrastructure.
  • Automatic scaling based on demand.
  • Minimal operational support.
  • Option D (AWS Lambda): Given the company’s requirements for a serverless microservices architecture with minimal infrastructure and operational support, AWS Lambda is a strong contender. It aligns well with the principles of serverless computing, automatically scaling based on demand and eliminating the need to manage the underlying infrastructure. It is important to note that the final choice could also depend on the specific application requirements and development preferences.
65
Q

737 # A company has an AWS Direct Connect connection from its on-premises location to an AWS account. The AWS account has 30 different VPCs in the same AWS Region. VPCs use virtual private interfaces (VIFs). Each VPC has a CIDR block that does not overlap with other networks under the company’s control. The company wants to centrally manage the network architecture while allowing each VPC to communicate with all other VPCs and on-premises networks. Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Create a transit gateway and associate the Direct Connect connection with a new transit VIF. Turn on the transit gateway’s route propagation feature.
B. Create a Direct Connect gateway. Recreate the private VIFs to use the new gateway. Associate each VPC by creating new virtual private gateways.
C. Create a transit VPC. Connect the Direct Connect connection to the transit VPC. Create a peering connection between all other VPCs in the Region. Update the route tables.
D. Create AWS Site-to-Site VPN connections from on-premises to each VPC. Make sure both VPN tunnels are enabled for each connection. Activate route propagation.

A

A. Create a transit gateway and associate the Direct Connect connection with a new transit VIF. Turn on the transit gateway’s route propagation feature.

  • Centralized management with a transit gateway. - Simplifies routing by using route propagation.
  • Option A (Transit Gateway): This option provides centralized management using a transit gateway, simplifies routing with route propagation, and avoids the need to recreate VIFs. It is a scalable and efficient solution for connecting multiple VPCs and local networks.
66
Q

738 # A company has applications running on Amazon EC2 instances. EC2 instances connect to Amazon RDS databases using an IAM role that has policies associated with it. The company wants to use AWS Systems Manager to patch EC2 instances without disrupting running applications. What solution will meet these requirements?

A. Create a new IAM role. Attach the AmazonSSMManagedInstanceCore policy to the new IAM role. Attach the new IAM role to the EC2 instances and the existing IAM role.
B. Create an IAM user. Attach the AmazonSSMManagedInstanceCore policy to the IAM user. Configure Systems Manager to use the IAM user to manage the EC2 instances.
C. Enable default host configuration management in Systems Manager to manage EC2 instances.
D. Delete the existing policies from the existing IAM role. Add the AmazonSSMManagedInstanceCore policy to the existing IAM role.

A

C. Enable default host configuration management in Systems Manager to manage EC2 instances.

  • This option, as clarified, seems to be a direct and efficient solution. It eliminates the need for manual changes to IAM roles and aligns with the requirement for no application disruption.
  • Default Host Management Configuration creates and applies a default IAM role to ensure that Systems Manager has permissions to manage all instances in the Region and perform automated patch scans using Patch Manager.

NOTE: Only one role can be assigned to an Amazon EC2 instance at a time, and all applications on the instance share the same role and permissions. (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html#) Suggested: Instead create 2 managed policies and attach them to the same IAM Role. Attach that IAM Role to the EC2 instance.

67
Q

739 # A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS) and the Kubernetes Horizontal Pod Autoscaler. The workload is not consistent throughout the day. A solutions architect notices that the number of nodes does not automatically scale out when the existing nodes have reached maximum capacity in the cluster, which causes performance issues. Which solution will resolve this issue with the LEAST administrative overhead?

A. Scale out the nodes by tracking the memory usage.
B. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
C. Use an AWS Lambda function to resize the EKS cluster automatically.
D. Use an Amazon EC2 Auto Scaling group to distribute the workload.

A

B. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.

  • This option is more aligned with Kubernetes best practices. The Kubernetes Cluster Autoscaler automatically scales the cluster size based on the resource requirements of the pods. It is designed to handle the dynamic nature of containerized workloads.
  • Option B, using Kubernetes Cluster Autoscaler, is probably the best option to solve the problem with the least administrative overhead. It aligns well with the Kubernetes ecosystem and provides the automation needed to scale the cluster based on the pod’s resource requirements.
68
Q

740 # A company maintains around 300 TB in Amazon S3 Standard storage month after month. The S3 objects are each typically around 50 GB in size and are frequently replaced by multipart uploads by the company’s global application. The number and size of S3 objects remain constant, but the company’s S3 storage costs increase each month. How should a solutions architect reduce costs in this situation?

A. Switch from multi-part uploads to Amazon S3 transfer acceleration.
B. Enable an S3 lifecycle policy that removes incomplete multipart uploads.
C. Configure S3 inventory to prevent objects from being archived too quickly.
D. Configure Amazon CloudFront to reduce the number of objects stored in Amazon S3.

A

B. Enable an S3 lifecycle policy that removes incomplete multipart uploads.

  • Incomplete multi-part uploads may consume additional storage. Enabling a lifecycle policy to remove incomplete multi-part uploads can help reduce storage costs by cleaning up unnecessary data.
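
A minimal sketch of the lifecycle rule (the bucket name and the 7-day window are placeholders):

import boto3

s3 = boto3.client("s3")

# Abort multipart uploads that have not completed within 7 days so the
# orphaned parts stop accruing storage charges.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-upload-bucket",
    LifecycleConfiguration={"Rules": [{
        "ID": "abort-incomplete-mpu",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},                         # apply to the whole bucket
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
    }]},
)
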
69
Q

741 # A company has implemented a multiplayer game for mobile devices. The game requires tracking the live location of players based on latitude and longitude. The game’s data store must support fast updates and fast retrieval of locations. The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During periods of peak usage, the database cannot maintain the performance necessary to read and write updates. The game’s user base is increasing rapidly. What should a solutions architect do to improve data tier performance?

A. Take a snapshot of the existing database instance. Restore the snapshot with Multi-AZ enabled.
B. Migrate from Amazon RDS to the Amazon OpenSearch service with OpenSearch Dashboards.
C. Deploy Amazon DynamoDB Accelerator (DAX) against the existing DB instance. Modify the game to use DAX.
D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing database instance. Modify the game to use Redis.

A

D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing database instance. Modify the game to use Redis.

  • Amazon ElastiCache for Redis is an in-memory data store that delivers very fast reads and writes and includes native geospatial data support, so it can absorb the frequent location updates and lookups in front of the existing PostgreSQL DB instance, which remains the system of record.
70
Q

742 # A company stores critical data in Amazon DynamoDB tables in the company’s AWS account. An IT administrator accidentally deleted a DynamoDB table. The deletion caused significant data loss and disrupted the company’s operations. The company wants to avoid these types of interruptions in the future. Which solution will meet this requirement with the LEAST operational overhead?

A. Set up a trail in AWS CloudTrail. Create an Amazon EventBridge rule to delete actions. Create an AWS Lambda function to automatically restore deleted DynamoDB tables.
B. Create a backup and restore plan for the DynamoDB tables. Recover DynamoDB tables manually.
C. Configure delete protection on DynamoDB tables.
D. Enable point-in-time recovery on DynamoDB tables.

A

C. Configure delete protection on DynamoDB tables.

  • Enabling delete protection on DynamoDB tables prevents accidental deletion of the entire table. This is a simple and effective way to mitigate the risk of accidental deletions.
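
A one-call sketch of enabling the setting on an existing table (the table name is a placeholder; assumes a recent boto3 version that exposes the DeletionProtectionEnabled parameter):

import boto3

dynamodb = boto3.client("dynamodb")

# While deletion protection is on, DeleteTable calls fail until the setting
# is explicitly disabled again.
dynamodb.update_table(
    TableName="critical-orders",
    DeletionProtectionEnabled=True,
)
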
71
Q

743 # A company has an on-premises data center that is running out of storage capacity. The company wants to migrate its storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow immediate data recovery at no additional cost. How can these requirements be met?

A. Deploy an Amazon S3 Glacier vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload.
B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally.
C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously backup point-in-time snapshots of your data to Amazon S3.
D. Deploy AWS Direct Connect to connect to the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously backup point-in-time snapshots of your data to Amazon S3.

A

B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally.

  • With cached volumes, the primary data resides in Amazon S3 and the gateway keeps a local cache of frequently accessed data for low-latency access. This migrates the bulk of the storage to AWS and relieves the on-premises capacity problem.
  • Because the volumes are backed by Amazon S3, any data can be retrieved on demand through the gateway without additional retrieval charges, and only changed data is transferred, which keeps bandwidth costs down.
  • Stored volumes (options C and D) keep the full dataset on premises, which does not address the shortage of local storage capacity.
72
Q

744 # A company runs a three-tier web application in a VPC across multiple availability zones. Amazon EC2 instances run in an auto-scaling group for the application tier. The company needs to make an automated scaling plan that analyzes the historical trends of the daily and weekly workload of each resource. The configuration should scale resources appropriately based on the forecast and changes in utilization. What scaling strategy should a solutions architect recommend to meet these requirements?

A. Implement dynamic scaling with step scaling based on the average CPU utilization of EC2 instances.
B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking
C. Create an automated scheduled scaling action based on web application traffic patterns.
D. Establish a simple scaling policy. Increase the cooldown period based on the startup time of the EC2 instance.

A

B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking

  • Predictive scaling uses machine learning algorithms to forecast future resource utilization based on historical data. Target tracking helps maintain a specific utilization target.
  • Considering the requirement to analyze both daily and weekly historical workload trends and adapt to current and forecasted changes, Option B (enable predictive scaling to forecast and scale, and configure dynamic scaling with target tracking) is the most suitable. Predictive scaling, with its machine learning capabilities, provides a proactive approach to scaling based on historical patterns.
  • Remember that the effectiveness of predictive scaling depends on the quality and stability of historical data. If the workload is highly dynamic or unpredictable, a combination of options may be necessary.
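For illustration, a hedged boto3 sketch of how both policies might be attached to the Auto Scaling group; the group name and target values are assumptions:

import boto3

autoscaling = boto3.client("autoscaling")

# Predictive scaling policy: forecasts capacity from historical CPU utilization.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-tier-asg",  # illustrative name
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
    },
)

# Target tracking policy: reacts to current utilization changes.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-tier-asg",
    PolicyName="target-tracking-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)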
73
Q

745 # A package delivery company has an application that uses Amazon EC2 instances and an Amazon Aurora MySQL DB cluster. As the application becomes more popular, EC2 instance usage increases only slightly. Database cluster usage is increasing at a much faster rate. The company adds a read replica, which reduces database cluster usage for a short period of time. However, the burden continues to increase. The operations that cause the database cluster usage to increase are all repeated read statements that are related to the delivery details. The company needs to alleviate the effect of repeated reads on the database cluster. Which solution will meet these requirements in the MOST cost-effective way?

A. Deploy an Amazon ElastiCache for Redis cluster between the application and the database cluster.
B. Add an additional read replica to the database cluster.
C. Configure Aurora auto-scaling for Aurora read replicas.
D. Modify the database cluster to have multiple write instances.

A

A. Deploy an Amazon ElastiCache for Redis cluster between the application and the database cluster.

  • Amazon ElastiCache for Redis can serve as an in-memory caching solution, reducing the need for repeated reads from the Aurora MySQL DB cluster.
  • Considering the requirement to alleviate the effect of repeated reads on the database cluster, and the cost-effectiveness aspect, Option A (deploying an Amazon ElastiCache for Redis cluster between the application and the database cluster) is the most cost-effective choice. Caching can significantly reduce the load on the database cluster by serving repeated read requests from memory.
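For illustration, a minimal cache-aside sketch in Python; the Redis endpoint, key names, and db_query helper are assumptions, not part of the question:

import json
import redis  # redis-py client

cache = redis.Redis(host="my-redis.xxxxxx.use1.cache.amazonaws.com", port=6379)

def get_delivery_details(delivery_id, db_query):
    # Serve repeated reads from Redis; fall back to the Aurora cluster on a miss.
    key = f"delivery:{delivery_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    details = db_query(delivery_id)             # existing data-access call (placeholder)
    cache.setex(key, 300, json.dumps(details))  # expire after 5 minutes
    return details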
74
Q

746 # A company has an application that uses an Amazon DynamoDB table for storage. A solutions architect discovers that many requests to the table do not return the most recent data. Company users have not reported any other issues with database performance. Latency is in an acceptable range. What design change should the solutions architect recommend?

A. Add read replicas to the table.
B. Use a global secondary index (GSI).
C. Request strongly consistent reads for the table.
D. Request eventually consistent reads for the table.

A

C. Request strongly consistent reads for the table.

  • Strongly consistent reads ensure that the most up-to-date data is returned, but they consume more read capacity and can have a greater impact on performance.
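For illustration, a boto3 sketch of requesting a strongly consistent read; the table and key names are assumptions:

import boto3

table = boto3.resource("dynamodb").Table("application-table")  # illustrative name

# ConsistentRead=True reflects all writes acknowledged before the read, at the cost
# of roughly double the read capacity of an eventually consistent read.
response = table.get_item(
    Key={"pk": "item-123"},
    ConsistentRead=True,
)
item = response.get("Item")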
75
Q

747 # A company has deployed its application on Amazon EC2 instances with an Amazon RDS database. The company used the principle of least privilege to configure database access credentials. The company’s security team wants to protect the application and database from SQL injection and other web-based attacks. Which solution will meet these requirements with the LEAST operational overhead?

A. Use security groups and network ACLs to protect the database and application servers.
B. Use AWS WAF to protect the application. Use RDS parameter groups to configure security settings.
C. Use the AWS network firewall to protect the application and database.
D. Use different database accounts in the application code for different functions. Avoid granting excessive privileges to database users.

A

B. Use AWS WAF to protect the application. Use RDS parameter groups to configure security settings.

AWS WAF is designed specifically for web application firewall protection: it lets you create rules that filter and monitor HTTP requests, helping to mitigate SQL injection and other common web-based attacks. RDS parameter groups can be used to configure additional database-specific security settings. This combination protects both the application and the database with minimal operational overhead.

76
Q

748 # An e-commerce company runs applications in AWS accounts that are part of an organization in AWS Organizations. Applications run on Amazon Aurora PostgreSQL databases in all accounts. The company needs to prevent malicious activities and must identify abnormal failed and incomplete login attempts to databases. Which solution will meet these requirements in the MOST operationally efficient manner?

A. Attach service control policies (SCPs) to the organization root to identify failed login attempts.
B. Enable the Amazon RDS protection feature in Amazon GuardDuty for the member accounts of the organization.
C. Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a central Amazon S3 bucket.
D. Publish all events from the Aurora PostgreSQL databases on AWS CloudTrail to a central Amazon S3 bucket.

A

B. Enable the Amazon RDS protection feature in Amazon GuardDuty for the member accounts of the organization.

  • RDS Protection in GuardDuty analyzes RDS login activity for potential access threats and generates findings when suspicious behavior is detected.
  • This option directly addresses the requirement of preventing malicious activity and identifying abnormal login attempts to Amazon Aurora databases, making it an effective option.

The most operationally efficient solution to prevent malicious activity and identify abnormal login attempts to Amazon Aurora databases. Provides automated threat detection designed specifically for RDS login activity without the need for additional infrastructure

77
Q

749 # A company has an AWS Direct Connect connection from its corporate data center to its VPC in the us-east-1 region. The company recently acquired a corporation that has multiple VPCs and a Direct Connect connection between its on-premises data center and the eu-west-2 region. CIDR blocks for enterprise and corporation VPCs do not overlap. The company requires connectivity between two regions and data centers. The company needs a solution that is scalable while reducing operating expenses. What should a solutions architect do to meet these requirements?

A. Establish VPC cross-region peering between the VPC in us-east-1 and the VPCs in eu-west-2.
B. Create virtual private interfaces from the Direct Connect connection on us-east-1 to the VPCs on eu-west-2.
C. Establish VPN appliances in a fully meshed VPN network hosted on Amazon EC2. Use AWS VPN CloudHub to send and receive data between the data centers and each VPC.
D. Connect the existing Direct Connect connection to a Direct Connect gateway. Route traffic from the virtual private gateways in the VPCs in each Region to the Direct Connect gateway.

A

D. Connect the existing Direct Connect connection to a Direct Connect gateway. Route traffic from the virtual private gateways in the VPCs in each Region to the Direct Connect gateway.

  • The Direct Connect gateway allows you to connect multiple VPCs in different regions to the same Direct Connect connection. Simplifies network architecture.
78
Q

750 # A company is developing a mobile game that transmits score updates to a backend processor and then publishes the results to a leaderboard. A solutions architect needs to design a solution that can handle large traffic spikes, process mobile game updates in order of receipt, and store the processed updates in a highly available database. The company also wants to minimize the management overhead required to maintain the solution. What should the solutions architect do to meet these requirements?

A. Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS Lambda. Store the processed updates in Amazon DynamoDB.
B. Push score updates to Amazon Kinesis Data Streams. Process updates with a fleet of Amazon EC2 instances configured for auto-scaling. Stores processed updates in Amazon Redshift.
C. Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to process updates. Store the processed updates in a SQL database running on Amazon EC2.
D. Send score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of auto-scaling Amazon EC2 instances to process updates to the SQS queue. Store the processed updates in an Amazon RDS Multi-AZ DB instance.

A

A. Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS Lambda. Store the processed updates in Amazon DynamoDB.

  • Amazon Kinesis Data Streams can handle large traffic spikes and preserves the order of records within each shard, so AWS Lambda processes the updates in the order they are received. Storing the processed updates in DynamoDB provides a highly available database with minimal management overhead.
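For illustration, a sketch of the Lambda handler that an event source mapping on the stream could invoke; the table name and payload fields are assumptions:

import base64
import json
import boto3

table = boto3.resource("dynamodb").Table("leaderboard")  # illustrative name

def handler(event, context):
    # Records within a shard arrive in the order they were put onto the stream.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(
            Item={
                "player_id": payload["player_id"],
                "score": payload["score"],
                "updated_at": payload["timestamp"],
            }
        )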
79
Q

751 # A company has multiple AWS accounts with applications deployed in the us-west-2 Region. Application logs are stored within Amazon S3 buckets in each account. The company wants to build a centralized log analysis solution that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operating overhead. Which solution meets these requirements and is MOST cost-effective?

A. Create an S3 lifecycle policy that copies objects from one of the S3 application buckets to the centralized S3 bucket.
B. Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket on us-west-2. Use this S3 bucket for log analysis.
C. Write a script that uses the PutObject API operation every day to copy all the contents of the buckets to another S3 bucket on us-west-2. Use this S3 bucket for log analysis.
D. Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to S3 buckets (event s3:ObjectCreated:*). Copy the logs to another S3 bucket on us-west-2. Use this S3 bucket for log analysis.

A

B. Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket on us-west-2. Use this S3 bucket for log analysis.

  • Built-in S3 function, minimal operating overhead.
  • Reduced latency and near real-time replication.
  • Amazon S3 SRR is an S3 feature that automatically replicates data between buckets within the same AWS Region. With SRR, you can set up replication at a bucket level, a shared prefix level, or an object level using S3 object tags. You can use SRR to make one or more copies of your data in the same AWS Region. SRR helps you address data sovereignty and compliance requirements by keeping a copy of your data in a separate AWS account in the same region as the original. (https://aws.amazon.com/s3/features/replication/#:~:text=Amazon%20S3%20SRR%20is%20an,in%20the%20same%20AWS%20Region.)

Same-Region Replication (SRR) is used to copy objects across Amazon S3 buckets in the same AWS Region. SRR can help you do the following:

Aggregate logs into a single bucket – If you store logs in multiple buckets or across multiple accounts, you can easily replicate logs into a single, in-Region bucket. Doing so allows for simpler processing of logs in a single location.
Configure live replication between production and test accounts – If you or your customers have production and test accounts that use the same data, you can replicate objects between those multiple accounts, while maintaining object metadata.
Abide by data sovereignty laws – You might be required to store multiple copies of your data in separate AWS accounts within a certain Region. Same-Region Replication can help you automatically replicate critical data when compliance regulations don’t allow the data to leave your country.
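For illustration, a hedged boto3 sketch of a Same-Region Replication rule on one of the application buckets; bucket names, the IAM role ARN, and the destination account ID are assumptions, and versioning must already be enabled on both buckets:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="app-logs-account-a",  # illustrative source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-logs-to-central-bucket",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::central-log-analysis-bucket",
                    "Account": "444455556666",
                    "AccessControlTranslation": {"Owner": "Destination"},
                },
            }
        ],
    },
)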

80
Q

752 # A company has an app that offers on-demand training videos to students around the world. The app also allows authorized content developers to upload videos. The data is stored in an Amazon S3 bucket in the us-east-2 region. The company has created an S3 bucket in the eu-west-2 region and an S3 bucket in the ap-southeast-1 region. The company wants to replicate the data in the new S3 buckets. The company needs to minimize latency for developers uploading videos and students streaming videos near eu-west-2 and ap-southeast-1. What combination of steps will meet these requirements with the LEAST changes to the application? (Choose two.)

A. Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure one-way replication from the us-east-2 S3 bucket to the ap-southeast-1 S3 bucket.
B. Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure one-way replication from the eu-west-2 S3 bucket to the ap-southeast-1 S3 bucket.
C. Configure two-way (bidirectional) replication between the S3 buckets located in the three regions.
D. Create an S3 multi-region access point. Modify the application to use the Amazon Resource Name (ARN) of the multi-region access point for video streaming. Do not modify the application to upload videos.
E. Create an S3 multi-region access point. Modify the application to use the Amazon Resource Name (ARN) of the multi-region access point for video streaming and uploading.

A

C. Configure two-way (bidirectional) replication among the S3 buckets located in all three regions.
E. Create an S3 multi-region access point. Modify the application to use the Amazon Resource Name (ARN) of the multi-region access point for video streaming and uploading.

Option C: - Settings: Bi-directional replication between S3 buckets in all three regions. - Analysis: This option guarantees bidirectional synchronization of data between the three regions, providing consistency and minimizing latency.

Option E: - Settings: Create an S3 Multi-Region Access Point and use it for both video streaming and uploads. - Analysis: The Multi-Region Access Point gives the application a single ARN that routes each request to the nearest replicated bucket, minimizing latency for both uploads and playback with minimal application changes. Combined with the bidirectional replication from Option C, which keeps the three buckets in sync, Options C and E together meet the requirements.

81
Q

753 # A company has a new mobile application. Anywhere in the world, users can watch local news on the topics of their choice. Users can also post photos and videos from within the app. Users access content often in the first few minutes after the content is published. New content quickly replaces older content, and then the older content disappears. The local nature of news means that users consume 90% of the content within the AWS Region where it is uploaded. Which solution will optimize the user experience by providing the LOWEST latency for content uploads?

A. Upload and store content to Amazon S3. Use Amazon CloudFront for uploads.
B. Upload and store content to Amazon S3. Use S3 transfer acceleration for uploads.
C. Upload content to Amazon EC2 instances in the region closest to the user. Copy the data to Amazon S3.
D. Upload and store content to Amazon S3 in the region closest to the user. Use multiple Amazon CloudFront distributions.

A

B. Upload and store content to Amazon S3. Use S3 transfer acceleration for uploads.

S3 Transfer Acceleration is designed to accelerate uploads to Amazon S3 by utilizing Amazon CloudFront’s globally distributed edge locations. This option can improve the speed of content uploads.

Considering the emphasis on minimizing latency for content uploads, Option B (using S3 transfer acceleration) appears to be the most appropriate. S3 transfer acceleration is explicitly designed to speed up uploads to Amazon S3, making it a good choice for optimizing the user experience during content uploads.
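For illustration, a boto3 sketch of enabling Transfer Acceleration and uploading through the accelerate endpoint; bucket and file names are assumptions:

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time bucket setting (bucket name is illustrative).
s3.put_bucket_accelerate_configuration(
    Bucket="news-content-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload through the accelerate endpoint, which uses the CloudFront edge network.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("video.mp4", "news-content-uploads", "uploads/video.mp4")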

82
Q

754 # A company is building a new application that uses a serverless architecture. The architecture will consist of an Amazon API Gateway REST API and AWS Lambda functions to handle incoming requests. The company wants to add a service that can send messages received from the gateway’s REST API to multiple target Lambda functions for processing. The service must provide message filtering that gives target Lambda functions the ability to receive only the messages that the functions need. Which solution will meet these requirements with the LEAST operational overhead?

A. Send REST API requests from the API Gateway to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure the target Lambda functions to poll the different SQS queues.
B. Send the requests from the API Gateway REST API to Amazon EventBridge. Configure EventBridge to invoke the target Lambda functions.
C. Send requests from the API Gateway REST API to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Configure Amazon MSK to publish messages to the target Lambda functions.
D. Send requests from the API Gateway REST API to multiple Amazon Simple Queue Service (Amazon SQS) queues. Configure the target Lambda functions to poll the different SQS queues.

A

B. Send the requests from the API Gateway REST API to Amazon EventBridge. Configure EventBridge to invoke the target Lambda functions.

Amazon EventBridge is a serverless event bus that simplifies event management. This option provides a scalable, serverless solution with minimal operational overhead. It allows direct invocation of Lambda functions, reducing the need for additional components.

NOTE: Option A: - Configuration: Send requests to an SNS topic, subscribe SQS queues to the SNS topic, and configure Lambda functions to poll the SQS queues. - Analysis: This option introduces additional components (SNS and SQS) but provides flexibility in decoupling components. It might have more operational overhead than other options because of the need to manage SNS topics and SQS queues.

Option D: - Configuration: Send requests to multiple SQS queues and configure Lambda functions to poll those queues. - Analysis: Similar to Option A, this introduces additional components (SQS queues). While it offers decoupling, it may have higher operational overhead because of managing multiple SQS queues.
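For illustration, a hedged boto3 sketch of an EventBridge rule that filters messages and targets one Lambda function; the bus name, event pattern, and function ARN are assumptions:

import boto3

events = boto3.client("events")

# Only events whose source and detail.type match the pattern reach this target.
events.put_rule(
    Name="route-order-messages",
    EventBusName="app-message-bus",
    EventPattern='{"source": ["app.api"], "detail": {"type": ["order"]}}',
    State="ENABLED",
)

events.put_targets(
    Rule="route-order-messages",
    EventBusName="app-message-bus",
    Targets=[
        {
            "Id": "order-processor",
            "Arn": "arn:aws:lambda:us-east-1:111122223333:function:process-orders",
        }
    ],
)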

83
Q

755 # A company migrated millions of archive files to Amazon S3. A solutions architect needs to implement a solution that encrypts all file data using a customer-provided key. The solution must encrypt existing unencrypted objects and future objects. What solution will meet these requirements?

A. Create a list of unencrypted objects by filtering an Amazon S3 inventory report. Configure an S3 Batch Operations job to encrypt the objects from the list using server-side encryption with customer-provided keys (SSE-C). Configure the S3 default encryption feature to use server-side encryption with customer-provided keys (SSE-C).
B. Use S3 Storage Lens metrics to identify unencrypted S3 buckets. Configure the S3 default encryption feature to use server-side encryption with AWS KMS (SSE-KMS) keys.
C. Create a list of unencrypted objects by filtering the AWS Usage Report for Amazon S3. Configure an AWS Batch job to encrypt the objects in the list using server-side encryption with AWS KMS (SSE-KMS) keys. Configure the S3 default encryption feature to use server-side encryption with AWS KMS (SSE-KMS) keys.
D. Create a list of unencrypted objects by filtering the AWS Usage Report for Amazon S3. Configure the S3 default encryption feature to use server-side encryption with customer-provided keys (SSE-C).

A

A. Create a list of unencrypted objects by filtering an Amazon S3 inventory report. Configure an S3 Batch Operations job to encrypt the objects from the list using server-side encryption with customer-provided keys (SSE-C). Configure the S3 default encryption feature to use server-side encryption with customer-provided keys (SSE-C).

  • Analysis: This option allows encryption of existing unencrypted objects and applies the default encryption for future objects. It is suitable for a customer-provided key (SSE-C).

https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations/
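For illustration, the per-object SSE-C re-encryption that such a Batch Operations job performs can be sketched with boto3 as follows; the bucket, key, and key-management details are assumptions:

import os
import boto3

s3 = boto3.client("s3")

bucket = "archive-bucket"            # illustrative
key = "reports/2020/file-0001.csv"   # illustrative
customer_key = os.urandom(32)        # 256-bit customer-provided key (must be retained securely)

# Copy the object onto itself, re-encrypting it with SSE-C; boto3 adds the
# base64 and MD5 key headers automatically.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# Reading the object back later requires supplying the same key.
obj = s3.get_object(
    Bucket=bucket,
    Key=key,
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)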

84
Q

756 # The DNS provider that hosts a company’s domain name records is experiencing outages that are causing downtime for a website running on AWS. The company needs to migrate to a more resilient managed DNS service and wants the service to run on AWS. What should a solutions architect do to quickly migrate DNS hosting service?

A. Create an Amazon Route 53 public hosted zone for the domain name. Import the zone file that contains the domain records hosted by the previous provider.
B. Create a private Amazon Route 53 hosted zone for the domain name. Import the zone file that contains the domain records hosted by the previous provider.
C. Create a simple AD directory in AWS. Enable zone transfer between the DNS provider and the AWS Directory Service for Microsoft Active Directory for domain records.
D. Create an Amazon Route 53 Resolver inbound endpoint in the VPC. Specify the IP addresses to which the provider’s DNS will forward DNS queries. Configure the provider’s DNS to forward DNS queries for the domain to the IP addresses that are specified on the inbound endpoint.

A

A. Create an Amazon Route 53 public hosted zone for the domain name. Import the zone file that contains the domain records hosted by the previous provider.

  • Analysis: This option involves creating a public hosted zone in Amazon Route 53 and importing the existing zone file records. It is a quick and straightforward approach to migrating DNS hosting and is suitable for a public website.
85
Q

757 # A company is building an application on AWS that connects to an Amazon RDS database. The company wants to manage application configuration and securely store and retrieve credentials from the database and other services. Which solution will meet these requirements with the LEAST administrative overhead?

A. Use AWS AppConfig to store and manage application configuration. Use AWS Secrets Manager to store and retrieve credentials.
B. Use AWS Lambda to store and manage application configuration. Use AWS Systems Manager Parameter Store to store and retrieve credentials.
C. Use an encrypted application configuration file. Store the file in Amazon S3 for application configuration. Create another S3 file to store and retrieve the credentials.
D. Use AWS AppConfig to store and manage application configuration. Use Amazon RDS to store and retrieve credentials.

A

A. Use AWS AppConfig to store and manage application configuration. Use AWS Secrets Manager to store and retrieve credentials.

  • Analysis: AWS AppConfig is designed to manage application configurations, and AWS Secrets Manager is designed to securely store and manage sensitive information, such as database credentials. This option provides a dedicated and secure solution for both aspects.
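For illustration, a minimal boto3 sketch of retrieving database credentials at runtime; the secret name and JSON fields are assumptions:

import json
import boto3

secrets = boto3.client("secretsmanager")

response = secrets.get_secret_value(SecretId="prod/app/rds-credentials")  # illustrative name
credentials = json.loads(response["SecretString"])

db_user = credentials["username"]
db_password = credentials["password"]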
86
Q

758 # To meet security requirements, an enterprise needs to encrypt all of its application data in transit while communicating with an Amazon RDS MySQL DB instance. A recent security audit revealed that encryption at rest is enabled using AWS Key Management Service (AWS KMS), but data in transit is not enabled. What should a solutions architect do to satisfy security requirements?

A. Enable IAM database authentication on the database.
B. Provide self-signed certificates. Use the certificates on all connections to the RDS instance.
C. Take a snapshot of the RDS instance. Restore the snapshot to a new instance with encryption enabled.
D. Download the root certificates provided by AWS. Provide the certificates on all connections to the RDS instance.

A

D. Download the root certificates provided by AWS. Provide the certificates on all connections to the RDS instance.

This option involves using the root certificates provided by AWS for SSL/TLS encryption. Downloading and configuring these certificates on application connections will encrypt data in transit. Make sure your application and database settings are configured correctly to use SSL/TLS.

To encrypt data in transit with Amazon RDS MySQL, option D is best suited. It involves using AWS-provided root certificates for SSL/TLS encryption, providing a secure way to encrypt communication between the application and the RDS instance.
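For illustration, a hedged sketch of a TLS-encrypted connection using PyMySQL; the endpoint, credentials, and certificate path are assumptions, and the CA bundle is the one AWS publishes for RDS:

import pymysql

connection = pymysql.connect(
    host="mydb.abcdefghijkl.us-east-1.rds.amazonaws.com",  # illustrative endpoint
    user="app_user",
    password="app_password",
    database="orders",
    ssl={"ca": "/opt/certs/global-bundle.pem"},  # enables TLS and verifies the server certificate
)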

87
Q

759 # A company is designing a new web service that will run on Amazon EC2 instances behind an Elastic Load Balancing (ELB) load balancer. However, many web services clients can only reach authorized IP addresses in their firewalls. What should a solutions architect recommend to meet customer needs?

A. A network load balancer with an associated Elastic IP address.
B. An application load balancer with an associated Elastic IP address.
C. An A record in an Amazon Route 53 hosted zone that points to an elastic IP address.
D. An EC2 instance with a public IP address running as a proxy in front of the load balancer.

A

A. A network load balancer with an associated Elastic IP address.

Using a Network Load Balancer instead of a Classic Load Balancer has the following benefits: Support for static IP addresses for the load balancer. https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html

88
Q

760 # A company has established a new AWS account. The account was recently provisioned and no changes were made to the default settings. The company is concerned about the security of the root user of the AWS account. What should be done to protect the root user?

A. Create IAM users for daily administrative tasks. Disable the root user.
B. Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.
C. Generate an access key for the root user. Use the access key for daily management tasks instead of the AWS Management Console.
D. Provide the root user credentials to the senior solutions architect. Have the solutions architect use the root user for daily administration tasks.

A

B. Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.

89
Q

761 # A company is implementing an application that processes streaming data in near real time. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest possible latency between nodes. What combination of network solutions will meet these requirements? (Choose two.)

A. Enable and configure enhanced networking on each EC2 instance.
B. Group EC2 instances into separate accounts.
C. Run the EC2 instances in a cluster placement group.
D. Connect multiple elastic network interfaces to each EC2 instance.
E. Use Amazon Elastic Block Store (Amazon EBS)-optimized instance types.

A

A. Enable and configure enhanced networking on each EC2 instance.
C. Run the EC2 instances in a cluster placement group.

  • Enhanced networking provides higher performance by offloading some of the network processing to the underlying hardware. This can help reduce latency.
  • A cluster placement group is a logical grouping of instances within a single availability zone. It is designed to provide low latency communication between instances. This can be particularly beneficial for applications that require high network performance.
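For illustration, a boto3 sketch of launching instances into a cluster placement group; the AMI ID and instance type are assumptions (current-generation Nitro types such as c5n have ENA enhanced networking enabled by default):

import boto3

ec2 = boto3.client("ec2")

# Create the cluster placement group (name is illustrative).
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch the instances into the placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # illustrative AMI
    InstanceType="c5n.18xlarge",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster"},
)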
90
Q

762 # A financial services company wants to close two data centers and migrate more than 100 TB of data to AWS. The data has an intricate directory structure with millions of small files stored in deep hierarchies of subfolders. Most data is unstructured, and the enterprise file storage consists of SMB-based storage types from multiple vendors. The company does not want to change its applications to access data after the migration. What should a solutions architect do to meet these requirements with the LEAST operational overhead?

A. Use AWS Direct Connect to migrate data to Amazon S3.
B. Use AWS DataSync to migrate data to Amazon FSx for Lustre.
C. Use AWS DataSync to migrate the data to Amazon FSx for Windows File Server.
D. Use AWS Direct Connect to migrate on-premises data file storage to an AWS Storage Gateway volume gateway.

A

C. Use AWS DataSync to migrate the data to Amazon FSx for Windows File Server.

AWS DataSync can be used to migrate data efficiently, and Amazon FSx for Windows File Server provides a highly available, fully managed Windows file system with support for SMB-based storage. This option allows the company to maintain compatibility of existing applications without changing the way applications access data after migration.

AWS DataSync is a data transfer service that simplifies, automates, and accelerates moving and replicating data between on-premises storage systems and AWS storage services over the internet or AWS Direct Connect. DataSync can transfer your file data, and also file system metadata such as ownership, time stamps, and access permissions. In DataSync, a location for Amazon FSx for Windows is an endpoint for an FSx for Windows File Server. You can transfer files between a location for Amazon FSx for Windows and a location for other file systems. For information, see Working with Locations in the AWS DataSync User Guide. DataSync accesses your FSx for Windows File Server using the Server Message Block (SMB) protocol.

91
Q

763 # An organization in AWS Organizations is used by a company to manage AWS accounts that contain applications. The company establishes a dedicated monitoring member account in the organization. The company wants to query and view observability data across all accounts using Amazon CloudWatch. What solution will meet these requirements?

A. Enable CloudWatch cross-account observability for the monitoring account. Deploy an AWS CloudFormation template provided by the monitoring account in each AWS account to share the data with the monitoring account.
B. Configure service control policies (SCPs) to provide access to CloudWatch in the monitoring account under the organization’s root organizational unit (OU).
C. Configure a new IAM user in the monitoring account. In each AWS account, configure an IAM policy to access and view CloudWatch data in the account. Attach the new IAM policy to the new IAM user.
D. Create a new IAM user in the monitoring account. Create cross-account IAM policies in each AWS account. Attach the IAM policies to the new IAM user.

A

A. Enable CloudWatch cross-account observability for the monitoring account. Deploy an AWS CloudFormation template provided by the monitoring account in each AWS account to share the data with the monitoring account.

  • This option involves enabling observability between CloudWatch accounts, allowing the monitoring account to access data from other accounts. Deploying an AWS CloudFormation template to each AWS account makes it easy to share observability data. This approach can work effectively to centralize monitoring across multiple accounts.
92
Q

764 # A company’s website is used to sell products to the public. The site runs on Amazon EC2 instances in an auto-scaling group behind an application load balancer (ALB). There is also an Amazon CloudFront distribution, and AWS WAF is being used to protect against SQL injection attacks. The ALB is the source of the CloudFront distribution. A recent review of security logs revealed an external malicious IP that must be blocked to access the website. What should a solutions architect do to secure the application?

A. Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address.
B. Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address.
C. Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.
D. Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.

A

B. Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address.

  • AWS WAF (Web Application Firewall) is designed to protect web applications from various attacks, including SQL injection. Since AWS WAF is already being used in this scenario, modifying its configuration to add an IP match condition is a suitable approach.
  • Modifying AWS WAF to add an IP match condition allows you to specify rules to block or allow requests based on specific IP addresses. This way, you can block access to the website of the identified malicious IP address.

NOTE regarding option A: Network ACLs apply to subnets in a VPC and cannot be attached to a CloudFront distribution. When CloudFront is protected by AWS WAF, IP-based filtering is performed with a rule action in the web ACL.
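For illustration, a hedged boto3 sketch of creating the IP set that a block rule in the existing web ACL would reference; the IP address and names are assumptions:

import boto3

# Scope must be CLOUDFRONT (and the call made in us-east-1) because the web ACL
# protects a CloudFront distribution in this architecture.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

ip_set = wafv2.create_ip_set(
    Name="blocked-ips",
    Scope="CLOUDFRONT",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.45/32"],  # illustrative malicious IP
)

# The returned ARN is then referenced from a block rule (IPSetReferenceStatement)
# in the existing web ACL.
print(ip_set["Summary"]["ARN"])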

93
Q

765 # A company establishes an organization in AWS Organizations that contains 10 AWS accounts. A solutions architect must design a solution to provide account access to several thousand employees. The company has an existing identity provider (IdP). The company wants to use the existing IdP for authentication to AWS. What solution will meet these requirements?

A. Create IAM users for employees in the required AWS accounts. Connect IAM users to the existing IdP. Configure federated authentication for IAM users.
B. Configure the AWS account root users with email addresses and passwords of the users that are synchronized from the existing IdP.
C. Configure AWS IAM Identity Center (AWS Single Sign-On). Connect the IAM Identity Center to the existing IdP. Provision users and groups from the existing IdP.
D. Use AWS Resource Access Manager (AWS RAM) to share access to AWS accounts with existing IdP users.

A

C. Configure AWS IAM Identity Center (AWS Single Sign-On). Connect the IAM Identity Center to the existing IdP. Provision users and groups from the existing IdP.

Explanation:
1. AWS IAM Identity Center (AWS Single Sign-On - SSO): - AWS SSO is a fully managed service that allows users to access multiple AWS accounts and applications using their existing corporate credentials. Simplifies user access management across all AWS accounts.
2. Connect to Existing IdP: - AWS Single Sign-On can be configured to connect to your existing Identity Provider (IdP), allowing users to sign in with their existing corporate credentials. This takes advantage of the existing authentication mechanism.
3. Provisioning Users and Groups: - AWS SSO allows you to provision users and groups from the existing IdP. This eliminates the need to manually create IAM users in each AWS account, providing a more centralized and efficient approach.

94
Q

766 # A solutions architect is designing an AWS Identity and Access Management (IAM) authorization model for a company’s AWS account. The company has designated five specific employees to have full access to AWS services and resources in the AWS account. The solutions architect has created an IAM user for each of the five designated employees and created an IAM user group. What solution will meet these requirements?

A. Attach the AdministratorAccess resource-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.
B. Attach the SystemAdministrator identity-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.
C. Attach the AdministratorAccess identity-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.
D. Attach the SystemAdministrator resource-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.

A

C. Attach the AdministratorAccess identity-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.

Explanation:
1. AdministratorAccess policy: - The “AdministratorAccess” policy is an AWS managed policy in IAM that grants full access to AWS services and resources. It is designed to provide unrestricted access to perform any action in your AWS account.
2. Identity-based policy: - Identity-based policies are attached directly to IAM users, groups, or roles. In this case, attaching the “AdministratorAccess” policy directly to the IAM user group ensures that all users within that group inherit the permissions.
3. IAM User Group: - Creating an IAM user group allows for easy permissions management. By placing each of the five designated employee IAM users in the IAM user group, you can efficiently manage and grant full access to the specified resources.

NOTE: - A. Attach the AdministratorAccess resource-based policy to the IAM user group: - Resource-based policies are used to define permissions on resources, such as S3 buckets or Lambda functions, not for IAM user groups. The “AdministratorAccess” policy is an identity-based policy.
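For illustration, a boto3 sketch of attaching the AWS managed AdministratorAccess policy to a group and adding users; the group and user names are assumptions:

import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="full-admins")  # illustrative group name

iam.attach_group_policy(
    GroupName="full-admins",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",  # AWS managed policy
)

for user in ["alice", "bob", "carol", "dave", "erin"]:  # illustrative user names
    iam.add_user_to_group(GroupName="full-admins", UserName=user)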

95
Q

767 # A company has a multi-tier payment processing application that relies on virtual machines (VMs). Communication between tiers occurs asynchronously through a third-party middleware solution that guarantees exactly-once delivery. The company needs a solution that requires the least amount of infrastructure management. The solution must guarantee exactly-once delivery for in-app messaging. What combination of actions will meet these requirements? (Choose two.)

A. Use AWS Lambda for the compute layers of the architecture.
B. Use Amazon EC2 instances for the compute layers of the architecture.
C. Use Amazon Simple Notification Service (Amazon SNS) as a messaging component between compute layers.
D. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues as the messaging component between the compute layers.
E. Use containers that are based on Amazon Elastic Kubernetes Service (Amazon EKS) for the compute layers in the architecture.

A

A. Use AWS Lambda for the compute layers of the architecture.
D. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues as the messaging component between the compute layers.

  • AWS Lambda is a serverless computing service that requires minimal infrastructure management. It automatically scales based on the number of incoming requests, and you don’t have to provision or manage servers. Lambda may be suitable for stateless and event-based processing, making it a good choice for certain types of applications.
  • Amazon SQS FIFO (First-In-First-Out) queues provide ordered message delivery and exactly-once processing. Using SQS FIFO queues ensures that messages are processed in the order they are received and delivered exactly once. This helps maintain the integrity of the payment processing application.
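For illustration, a boto3 sketch of a FIFO queue and an ordered, deduplicated send; the queue name and message content are assumptions:

import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo"; content-based deduplication derives the
# deduplication ID from a hash of the message body.
queue = sqs.create_queue(
    QueueName="payments.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)

# Messages with the same MessageGroupId are delivered in order, exactly once.
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"payment_id": "p-123", "amount": 42.50}',
    MessageGroupId="p-123",
)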
96
Q

768 # A company has a nightly batch processing routine that analyzes the report files that a local file system receives daily via SFTP. The company wants to move the solution to the AWS cloud. The solution must be highly available and resilient. The solution must also minimize operational effort. Which solution meets these requirements?

A. Deploy AWS Transfer for SFTP and an Amazon Elastic File System (Amazon EFS) file system for storage. Use an Amazon EC2 instance in an auto-scaling group with a scheduled scaling policy to run the batch operation.
B. Deploy an Amazon EC2 instance running Linux and an SFTP service. Use an Amazon Elastic Block Store (Amazon EBS) volume for storage. Use an auto-scaling group with the minimum number of instances and the desired number of instances set to 1.
C. Deploy an Amazon EC2 instance running Linux and an SFTP service. Use an Amazon Elastic File System (Amazon EFS) file system for storage. Use an auto-scaling group with the minimum number of instances and the desired number of instances set to 1.
D. Deploy AWS Transfer for SFTP and an Amazon S3 bucket for storage. Modify the application to extract batch files from Amazon S3 to an Amazon EC2 instance for processing. Use an EC2 instance in an auto-scaling group with a scheduled scaling policy to run the batch operation.

A

D. Deploy AWS Transfer for SFTP and an Amazon S3 bucket for storage. Modify the application to extract batch files from Amazon S3 to an Amazon EC2 instance for processing. Use an EC2 instance in an auto-scaling group with a scheduled scaling policy to run the batch operation.

97
Q

769 # A company has users around the world accessing its HTTP-based application deployed on Amazon EC2 instances in multiple AWS Regions. The company wants to improve the availability and performance of the application. The company also wants to protect the application against common web exploits that can affect availability, compromise security or consume excessive resources. Static IP addresses are required. What should a solutions architect recommend to achieve this?

A. Put EC2 instances behind network load balancers (NLBs) in each region. Deploy AWS WAF on NLBs. Create an accelerator using AWS Global Accelerator and register NLBs as endpoints.
B. Put the EC2 instances behind application load balancers (ALBs) in each region. Deploy AWS WAF in the ALBs. Create an accelerator using AWS Global Accelerator and register ALBs as endpoints.
C. Put EC2 instances behind network load balancers (NLBs) in each region. Deploy AWS WAF on NLBs. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to NLBs.
D. Put EC2 instances behind application load balancers (ALBs) in each region. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to ALBs. Deploy AWS WAF to your CloudFront distribution.

A

B. Put the EC2 instances behind application load balancers (ALBs) in each region. Deploy AWS WAF in the ALBs. Create an accelerator using AWS Global Accelerator and register ALBs as endpoints.

  • ALBs are designed to route HTTP/HTTPS traffic and provide advanced features including content-based routing, making them suitable for web applications.
  • AWS WAF on ALB provides protection against common web exploits.
  • AWS Global Accelerator is used to improve availability and performance by providing a static Anycast IP address and directing traffic to optimal AWS endpoints.
98
Q

770 # A company’s data platform uses an Amazon Aurora MySQL database. The database has multiple read replicas and multiple database instances in different availability zones. Users have recently reported database errors indicating there are too many connections. The company wants to reduce failover time by 20% when a read replica is promoted to primary writer. What solution will meet this requirement?

A. Switch from Aurora to Amazon RDS with Multi-AZ cluster deployment.
B. Use Amazon RDS Proxy in front of the Aurora database.
C. Switch to Amazon DynamoDB with DynamoDB Accelerator (DAX) for read connections.
D. Switch to Amazon Redshift with relocation capability.

A

B. Use Amazon RDS Proxy in front of the Aurora database.

To reduce failover time and improve connection handling on an Amazon Aurora MySQL database, the recommended solution is Option B: Use Amazon RDS Proxy in front of the Aurora database.

Explanation:
1. Amazon RDS Proxy: - Amazon RDS Proxy is a highly available, fully managed database proxy for Amazon RDS (Relational Database Service) that makes applications more scalable, more resilient to database failures, and more secure. It pools and manages database connections, which is particularly beneficial in scenarios with too many connections.
2. Benefits of using Amazon RDS Proxy: - Efficient connection pooling: RDS Proxy efficiently manages connections to the database, reducing the potential for connection-related issues. - Reduced failover time: RDS Proxy can significantly reduce failover time when a read replica is promoted to the primary writer because it maintains established connections during failovers, minimizing the impact on applications.

NOTE: Discussion of other options: - Option A (Switch from Aurora to Amazon RDS with Multi-AZ cluster deployment): This option may not address the specific need to reduce failover time, and Aurora is known for its fast failovers. Multi-AZ deployment is already a feature available in Aurora.
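For illustration, a hedged boto3 sketch of creating an RDS Proxy in front of the cluster; every identifier, ARN, and subnet ID is an assumption:

import boto3

rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="aurora-mysql-proxy",
    EngineFamily="MYSQL",
    Auth=[
        {
            "AuthScheme": "SECRETS",
            "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",  # illustrative
            "IAMAuth": "DISABLED",
        }
    ],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",  # illustrative
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    RequireTLS=True,
)

# The application then connects to the proxy endpoint instead of the cluster endpoint;
# the proxy keeps established connections warm across failovers.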

99
Q

771 # A company stores text files in Amazon S3. Text files include customer chat messages, date and time information, and customer personally identifiable information (PII). The company needs a solution to provide conversation samples to a third-party service provider for quality control. The external service provider needs to randomly choose sample conversations up to the most recent conversation. The company must not share the customer’s PII with the third-party service provider. The solution must scale as the number of customer conversations increases. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an S3 Object Lambda access point. Create an AWS Lambda function that redacts the PII when the function reads the file. Instruct the external service provider to access the Object Lambda access point.
B. Create a batch process on an Amazon EC2 instance that regularly reads all new files, redacts the PII from the files, and writes the redacted files to a different S3 bucket. Instruct the third-party service provider to access the bucket that does not contain the PII.
C. Create a web application on an Amazon EC2 instance that lists the files, redacts the PII of the files, and allows the third-party service provider to download new versions of the files that have the redacted PII.
D. Create an Amazon DynamoDB table. Create an AWS Lambda function that reads only data from files that do not contain PII. Configure the Lambda function to store non-PII data in the DynamoDB table when a new file is written to Amazon S3. Grant the external service provider access to the DynamoDB table.

A

A. Create an S3 Object Lambda access point. Create an AWS Lambda function that redacts the PII when the function reads the file. Instruct the external service provider to access the Object Lambda access point.

  • S3 Object Lambda access points allow custom processing of S3 object data before it is returned to the requester. A Lambda function attached to the access point dynamically redacts PII from the text files as they are accessed, so the external service provider receives only redacted content that contains no PII.
  • AWS Lambda provides a scalable, low-overhead, serverless environment for processing.
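For illustration, a hedged sketch of the redaction handler that an S3 Object Lambda access point could invoke; the regular expression is a stand-in for a real PII-detection step (for example Amazon Comprehend):

import re
import urllib.request
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Fetch the original object via the presigned URL supplied in the event,
    # redact PII, and return the transformed body to the requester.
    ctx = event["getObjectContext"]
    original = urllib.request.urlopen(ctx["inputS3Url"]).read().decode("utf-8")

    redacted = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", original)  # illustrative rule

    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=redacted.encode("utf-8"),
    )
    return {"statusCode": 200}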
100
Q

772 # A company is running a legacy system on an Amazon EC2 instance. The application code cannot be modified and the system cannot run in more than one instance. A solutions architect must design a resilient solution that can improve system recovery time. What should the solutions architect recommend to meet these requirements?

A. Enable termination protection for the EC2 instance.
B. Configure the EC2 instance for Multi-AZ deployment.
C. Create an Amazon CloudWatch alarm to recover the EC2 instance in case of failure.
D. Start the EC2 instance with two Amazon Elastic Block Store (Amazon EBS) volumes that use RAID configurations for storage redundancy.

A

C. Create an Amazon CloudWatch alarm to recover the EC2 instance in case of failure.

This option uses Amazon CloudWatch to monitor the health of the EC2 instance. A CloudWatch alarm on the StatusCheckFailed_System metric can be configured with the EC2 recover action, which automatically recovers the instance onto healthy underlying hardware while preserving the instance ID, private IP addresses, Elastic IP addresses, and attached Amazon EBS volumes. This improves recovery time without modifying the application code or running more than one instance.
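For illustration, a boto3 sketch of the recovery alarm; the instance ID and Region are assumptions:

import boto3

cloudwatch = boto3.client("cloudwatch")

# The "recover" alarm action moves the instance to healthy underlying hardware
# while keeping its instance ID, private IP addresses, Elastic IPs, and EBS volumes.
cloudwatch.put_metric_alarm(
    AlarmName="recover-legacy-instance",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)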