Q101-150 Flashcards

1
Q

A company has an Amazon EC2 instance running on a private subnet that needs to access a public website to download patches and updates. The company does not want external websites to see the EC2 instance IP address or initiate connections to it.
How can a solutions architect achieve this objective?

A. Create a site-to-site VPN connection between the private subnet and the network in which the public site is deployed.

B. Create a NAT gateway in a public subnet. Route outbound traffic from the private subnet through the NAT gateway.

C. Create a network ACL for the private subnet where the EC2 instance is deployed that only allows access from the IP address range of the public website.

D. Create a security group that only allows connections from the IP address range of the public website. Attach the security group to the EC2 instance.

A

A. Creating a site-to-site VPN connection would establish a secure connection between the private subnet and the network where the public website is deployed. While this would provide secure communication, it doesn’t hide the EC2 instance’s IP address or prevent incoming connections from external websites.

C. Creating a network ACL (Access Control List) that allows access only from the IP address range of the public website is a partial solution. While it restricts incoming traffic to your EC2 instance, it does not hide the EC2 instance’s IP address or control outbound connections.

D. Creating a security group that only allows connections from the IP address range of the public website would also restrict incoming traffic, but it does not address the requirement to hide the EC2 instance’s IP address or control outbound connections.
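
The remaining option, B, is the correct answer: a NAT gateway lets instances in the private subnet initiate outbound connections while external hosts see only the gateway’s Elastic IP and can never initiate connections inbound. As a rough illustration, here is a minimal boto3 sketch (the subnet and route table IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public0000000000",        # placeholder public subnet
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Route the private subnet's internet-bound traffic through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0private000000000",       # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)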

2
Q

A company must migrate 20 TB of data from a data center to the AWS Cloud within 30 days. The company’s network bandwidth is limited to 15 Mbps and cannot exceed 70% utilization. What should a solutions architect do to meet these requirements?

A. Use AWS Snowball.

B. Use AWS DataSync.

C. Use a secure VPN connection.

D. Use Amazon S3 Transfer Acceleration.

A

B. Use AWS DataSync:
AWS DataSync is a service designed for transferring data between on-premises storage and Amazon S3 or Amazon EFS. While it can be used for data transfer, it may not be the best choice for transferring 20 TB of data within a 30-day timeframe, given the limited network bandwidth of 15 Mbps. AWS DataSync relies on the available network bandwidth, and with the given constraints, it would likely take significantly longer than 30 days to complete the transfer.

C. Use a secure VPN connection:
A secure VPN connection is used to establish a secure, encrypted connection between your on-premises network and your AWS environment. While it can be useful for secure communication, it doesn’t address the limitations of the available network bandwidth (15 Mbps). The data transfer rate would still be limited by the available bandwidth, making it impractical to transfer 20 TB of data within 30 days in this scenario.

D. Use Amazon S3 Transfer Acceleration:
Amazon S3 Transfer Acceleration is a feature of Amazon S3 that is designed to speed up uploads to Amazon S3 by leveraging Amazon CloudFront’s globally distributed edge locations. While it can help improve upload speed, it doesn’t address the fundamental limitation of the available network bandwidth (15 Mbps). It can be useful for improving performance when transferring data over the internet, but it may not be sufficient to meet the 20 TB data transfer requirement within the specified time frame, especially if the network bandwidth is a bottleneck.

3
Q

A company has a website running on Amazon EC2 instances across two Availability Zones. The company is expecting spikes in traffic on specific holidays, and wants to provide a consistent user experience. How can a solutions architect meet this requirement?

A. Use step scaling.

B. Use simple scaling.

C. Use lifecycle hooks.

D. Use scheduled scaling.

A

A. Step scaling is a dynamic scaling method that adjusts the capacity of your Auto Scaling group based on a set of policies and metrics. It is typically used for responding to changes in traffic patterns that are not predictable, such as sudden increases or decreases in load. It may not be the best choice for handling planned, predictable spikes in traffic on specific holidays.

B. Simple scaling, similar to step scaling, is used for dynamic scaling based on specified policies and metrics. It is not specifically designed for handling scheduled, predictable traffic spikes.

C. Lifecycle hooks are used to perform custom actions when instances are launched or terminated within an Auto Scaling group. While they can be helpful for customizing instance launches and terminations, they are not directly related to addressing the requirement of handling traffic spikes on specific holidays.
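
D, scheduled scaling, is the correct answer because the traffic spikes fall on known dates, so capacity can be added ahead of time and removed afterwards. A minimal boto3 sketch (the group name, dates, and sizes are placeholders):

import boto3
from datetime import datetime, timezone

autoscaling = boto3.client("autoscaling")

# Scale out before the holiday spike...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="holiday-scale-out",
    StartTime=datetime(2024, 12, 24, 6, 0, tzinfo=timezone.utc),
    MinSize=4,
    MaxSize=20,
    DesiredCapacity=12,
)

# ...and back in once the holiday has passed.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="holiday-scale-in",
    StartTime=datetime(2024, 12, 27, 6, 0, tzinfo=timezone.utc),
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=2,
)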

4
Q

An ecommerce company is running a multi-tier application on AWS. The front-end and backend tiers both run on Amazon EC2, and the database runs on Amazon RDS for MySQL. The backend tier communicates with the RDS instance. There are frequent calls to return identical datasets from the database that are causing performance slowdowns.
Which action should be taken to improve the performance of the backend?

A. Implement Amazon SNS to store the database calls.

B. Implement Amazon ElastiCache to cache the large datasets.

C. Implement an RDS for MySQL read replica to cache database calls.

D. Implement Amazon Kinesis Data Firehose to stream the calls to the database.

A

A. Implementing Amazon SNS (Simple Notification Service) is used for communication between distributed components and for sending notifications. It is not designed for caching data or improving the performance of database calls.

C. Implementing an RDS for MySQL read replica can help distribute read traffic and improve read scalability, but it doesn’t specifically address the issue of caching identical datasets to reduce database calls.

D. Amazon Kinesis Data Firehose is used for streaming data to other AWS services, such as data lakes and data analytics. It’s not designed for caching or improving the performance of database calls.
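
B, implementing Amazon ElastiCache, is the correct answer: caching the identical result sets keeps repeat requests off the database entirely. The usual pattern is cache-aside (lazy loading), sketched here with the redis client library; run_mysql_query is a hypothetical helper standing in for the existing RDS query code:

import json
import redis

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com")  # placeholder endpoint

def get_dataset(query_key, ttl_seconds=300):
    # 1. Try the cache first.
    cached = cache.get(query_key)
    if cached is not None:
        return json.loads(cached)
    # 2. On a miss, query RDS once, then populate the cache for later callers.
    rows = run_mysql_query(query_key)   # hypothetical helper that queries RDS for MySQL
    cache.setex(query_key, ttl_seconds, json.dumps(rows))
    return rows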

5
Q

A company has an on-premises data center that is running out of storage capacity. The company wants to migrate its storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow for immediate retrieval of data at no additional cost. How can these requirements be met?

A. Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload.

B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally.

C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.

D. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.

A

A. Deploying Amazon S3 Glacier Vault with expedited retrieval and provisioned retrieval capacity may provide fast access to archived data but is not suitable for immediate retrieval of all data, and it could incur additional costs.

C. Deploying AWS Storage Gateway with stored volumes is primarily designed for data backup and recovery, not for immediate retrieval of data without additional cost.

D. Deploying AWS Direct Connect would establish a dedicated network connection to AWS but doesn’t directly address the storage and data retrieval requirements.

6
Q

A company is processing data on a daily basis. The results of the operations are stored in an Amazon S3 bucket, analyzed daily for one week, and then must remain immediately accessible for occasional analysis.
What is the MOST cost-effective storage solution alternative to the current configuration?

A. Configure a lifecycle policy to delete the objects after 30 days.

B. Configure a lifecycle policy to transition the objects to Amazon S3 Glacier after 30 days.

C. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.

D. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.

A

A. Configuring a lifecycle policy to delete the objects after 30 days would not meet the requirement of keeping the data immediately accessible for occasional analysis.

B. Configuring a lifecycle policy to transition the objects to Amazon S3 Glacier after 30 days would lower storage costs but would not keep the data immediately accessible without additional retrieval time and cost.

D. Configuring a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days might offer slightly lower storage costs compared to S3 Standard-IA, but it comes with the risk of data loss in the event of a data center failure in the availability zone. Since the requirement is to keep the data immediately accessible, S3 One Zone-IA may not be the best choice.
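
C is the correct answer: S3 Standard-IA cuts the per-GB storage price while keeping objects immediately retrievable and redundantly stored across multiple Availability Zones. A minimal boto3 sketch of the lifecycle rule (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="daily-results-bucket",      # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-standard-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # apply to every object in the bucket
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)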

7
Q

A company delivers files in Amazon S3 to certain users who do not have AWS credentials. These users must be given access for a limited time. What should a solutions architect do to securely meet these requirements?

A. Enable public access on an Amazon S3 bucket.

B. Generate a presigned URL to share with the users.

C. Encrypt files using AWS KMS and provide keys to the users.

D. Create and assign IAM roles that will grant GetObject permissions to the users.

A

A. Enabling public access on an Amazon S3 bucket is not a secure solution, as it would make the files in the bucket accessible to anyone on the internet, potentially exposing sensitive data.

C. Encrypting files using AWS KMS and providing keys to users is a valid approach for securing data at rest, but it doesn’t address the requirement of granting temporary access for a limited time. It’s more suitable for securing data within the AWS environment.

D. Creating and assigning IAM roles with GetObject permissions is meant for granting access to AWS resources to users within the AWS environment who have AWS credentials. It’s not designed for providing temporary, external access to files in an S3 bucket for users without AWS credentials.
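
B, generating a presigned URL, is the correct answer: the URL carries temporary credentials, so recipients need no AWS account, and access expires automatically. A minimal boto3 sketch (bucket and key are placeholders):

import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "delivery-bucket", "Key": "reports/q3.pdf"},  # placeholders
    ExpiresIn=3600,   # the link stops working after one hour
)
print(url)            # share this URL with the external users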

8
Q

A company wants to run a hybrid workload for data processing. The data needs to be accessed by on-premises applications for local data processing using an NFS protocol, and must also be accessible from the AWS Cloud for further analytics and batch processing.
Which solution will meet these requirements?

A. Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud.

B. Use an AWS Storage Gateway tape gateway to copy the backup of the local data to AWS, then perform analytics on this data in the AWS cloud.

C. Use an AWS Storage Gateway volume gateway in a stored volume configuration to regularly take snapshots of the local data, then copy the data to AWS.

D. Use an AWS Storage Gateway volume gateway in a cached volume configuration to back up all the local storage in the AWS cloud, then perform analytics on this data in the cloud.

A

B. Using an AWS Storage Gateway tape gateway is primarily designed for long-term archiving and backup, not for providing immediate access to data for local data processing or analytics.

C. Using an AWS Storage Gateway volume gateway in a stored volume configuration is suitable for data backup and recovery but does not provide real-time access for local data processing and analytics.

D. Using an AWS Storage Gateway volume gateway in a cached volume configuration is designed for primary storage and backup of data in the AWS Cloud. While it can back up data from on-premises storage, it is not the best solution for real-time access and processing of data on-premises.

9
Q

A company plans to store sensitive user data on Amazon S3. Internal security compliance requirements mandate encryption of data before sending it to Amazon S3.
What should a solutions architect recommend to satisfy these requirements?

A. Server-side encryption with customer-provided encryption keys

B. Client-side encryption with Amazon S3 managed encryption keys

C. Server-side encryption with keys stored in AWS Key Management Service (AWS KMS)

D. Client-side encryption with a master key stored in AWS Key Management Service (AWS KMS)

A

A. Server-side encryption with customer-provided encryption keys (SSE-C) involves providing your own encryption keys, which means you need to manage those keys yourself. While this option is valid, it might be more complex to implement and manage for compliance purposes.

B. Client-side encryption with Amazon S3 managed encryption keys is not a workable combination: S3-managed keys (SSE-S3) are applied by the service after the data arrives at S3, so they cannot be used on the client side to encrypt data before it is sent. This option therefore cannot meet the compliance requirement.

C. Server-side encryption with keys stored in AWS KMS (SSE-KMS) is a good option for server-side encryption, but it may not satisfy the requirement to encrypt data before sending it to S3. SSE-KMS is typically used for encrypting data at rest within S3, not for encrypting data before it’s uploaded.

10
Q

A solutions architect is moving the static content from a public website hosted on Amazon EC2 instances to an Amazon S3 bucket. An Amazon CloudFront distribution will be used to deliver the static assets. The security group used by the EC2 instances restricts access to a limited set of IP ranges. Access to the static content should be similarly restricted.
Which combination of steps will meet these requirements? (Choose two.)

A. Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects.

B. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution.

C. Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the CloudFront distribution.

D. Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the S3 bucket hosting the static content.

E. Create a new IAM role and associate the role with the distribution. Change the permissions either on the S3 bucket or on the files within the S3 bucket so that only the newly created IAM role has read and download permissions.

A

C. Creating a new security group and associating it with CloudFront is not a valid approach because security groups are used for EC2 instances, not CloudFront distributions.

D. Creating a new security group and associating it with the S3 bucket is also not a valid approach because security groups are not used to restrict access to S3 buckets directly. They are used for EC2 instances.

E. Creating a new IAM role and changing permissions on the S3 bucket or files within the bucket is not necessary for achieving IP-based restrictions. IAM roles are used for granting permissions to AWS services, not for IP filtering.
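
The correct combination is A and B: the origin access identity (OAI) forces every read through CloudFront, and the AWS WAF web ACL reproduces the security group’s IP restrictions at the distribution. For the OAI half, the bucket policy looks roughly like this sketch (the OAI ID and bucket name are placeholders):

import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::static-assets-bucket/*",
        }
    ],
}
s3.put_bucket_policy(Bucket="static-assets-bucket", Policy=json.dumps(policy))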

11
Q

A company is investigating potential solutions that would collect, process, and store users’ service usage data. The business objective is to create an analytics capability that will enable the company to gather operational insights quickly using standard SQL queries. The solution should be highly available and ensure Atomicity, Consistency, Isolation, and Durability (ACID) compliance in the data tier.
Which solution should a solutions architect recommend?

A. Use an Amazon Timestream database.

B. Use an Amazon Neptune database in a Multi-AZ design.

C. Use a fully managed Amazon RDS for MySQL database in a Multi-AZ design.

D. Deploy PostgreSQL on an Amazon EC2 instance that uses Amazon Elastic Block Store (Amazon EBS) Throughput Optimized HDD (st1) storage.

A

A. Amazon Timestream is a purpose-built, serverless time-series database for managing and querying time-series data. While it’s excellent for time-series data, it may not be the best fit for standard SQL queries and traditional relational database requirements such as ACID compliance.

B. Amazon Neptune is a graph database and may not be the ideal choice for handling service usage data and standard SQL queries with ACID compliance. Multi-AZ is more about high availability and not necessarily about ACID compliance.

D. Deploying PostgreSQL on an Amazon EC2 instance using Amazon EBS storage gives you flexibility but involves more operational overhead in terms of management and scaling. It doesn’t provide the same level of managed service as Amazon RDS, and achieving ACID compliance may require more effort.

12
Q

A company recently launched its website to serve content to its global user base. The company wants to store and accelerate the delivery of static content to its users by leveraging Amazon CloudFront with an Amazon EC2 instance attached as its origin.
How should a solutions architect optimize high availability for the application?

A. Use Lambda@Edge for CloudFront.

B. Use Amazon S3 Transfer Acceleration for CloudFront.

C. Configure another EC2 instance in a different Availability Zone as part of the origin group.

D. Configure another EC2 instance as part of the origin server cluster in the same Availability Zone.

A

A. Using Lambda@Edge for CloudFront is primarily for running serverless functions at the edge locations for request/response customization and does not directly address high availability for the origin servers.

B. Amazon S3 Transfer Acceleration is a feature of Amazon S3 and not directly related to optimizing high availability for the application’s EC2 instances.

D. Configuring another EC2 instance as part of the origin server cluster in the same Availability Zone might help with load balancing, but it wouldn’t provide the same level of high availability as using multiple Availability Zones. If the entire Availability Zone goes down, the application would be impacted.

13
Q

An application running on an Amazon EC2 instance in VPC-A needs to access files in another EC2 instance in VPC-B. Both are in separate AWS accounts. The network administrator needs to design a solution to configure secure access to EC2 instance in VPC-B from VPC-A. The connectivity should not have a single point of failure or bandwidth concerns.
Which solution will meet these requirements?

A. Set up a VPC peering connection between VPC-A and VPC-B.

B. Set up VPC gateway endpoints for the EC2 instance running in VPC-B.

C. Attach a virtual private gateway to VPC-B and set up routing from VPC-A.

D. Create a private virtual interface (VIF) for the EC2 instance running in VPC-B and add appropriate routes from VPC-A.

A

A. Setting up a VPC peering connection between VPC-A and VPC-B is the correct answer. VPC peering can be established between VPCs in different AWS accounts, traffic stays on the AWS network, and a peering connection is neither a single point of failure nor a bandwidth bottleneck.

B. Setting up VPC gateway endpoints is used for accessing AWS services like S3 or DynamoDB without the need to traverse the internet. It doesn’t provide direct access between EC2 instances in different VPCs, especially when they are in separate AWS accounts.

C. Attaching a virtual private gateway to VPC-B is part of a VPN connection and is typically used for connecting on-premises networks to a VPC. It doesn’t provide a direct, secure way to access EC2 instances in VPC-B from VPC-A in separate AWS accounts.

D. A private virtual interface (VIF) is an AWS Direct Connect construct for connecting an on-premises network to a VPC. It is not a mechanism for routing traffic between two VPCs, so it cannot satisfy this requirement.

14
Q

A company currently stores symmetric encryption keys in a hardware security module (HSM). A solutions architect must design a solution to migrate key management to AWS. The solution should allow for key rotation and support the use of customer-provided keys.
Where should the key material be stored to meet these requirements?

A. Amazon S3

B. AWS Secrets Manager

C. AWS Systems Manager Parameter store

D. AWS Key Management Service (AWS KMS)

A

A. Amazon S3 is an object storage service and not designed for managing encryption keys directly. While you can encrypt data stored in S3 using AWS KMS, S3 itself is not a key management solution.

B. AWS Secrets Manager is designed for managing and rotating secrets, such as database passwords and API keys, but it is not the primary service for managing encryption keys.

C. AWS Systems Manager Parameter Store is used for storing configuration data and parameters. While it can store sensitive information, it is not a dedicated key management solution for encryption keys.

15
Q

A recent analysis of a company’s IT expenses highlights the need to reduce backup costs. The company’s chief information officer wants to simplify the on-premises backup infrastructure and reduce costs by eliminating the use of physical backup tapes. The company must preserve the existing investment in the on-premises backup applications and workflows.
What should a solutions architect recommend?

A. Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.

B. Set up an Amazon EFS file system that connects with the backup applications using the NFS interface.

C. Set up an Amazon EFS file system that connects with the backup applications using the iSCSI interface.

D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library (VTL) interface.

A

A. Setting up AWS Storage Gateway with the NFS interface (file gateway) would provide file-based access to storage in AWS and might not be the best fit for backup applications that are typically designed to work with tape-based interfaces or block-based storage.

B. Setting up an Amazon EFS file system with NFS access would provide a scalable file system in AWS but may not seamlessly integrate with existing backup applications and workflows designed for tape-based backups.

C. Setting up an Amazon EFS file system with iSCSI access is not a typical configuration, and it does not provide the same compatibility and integration as the iSCSI-VTL interface for backup applications.

16
Q

A company hosts an application on an Amazon EC2 instance that requires a maximum of 200 GB storage space. The application is used infrequently, with peaks during mornings and evenings. Disk I/O varies, but peaks at 3,000 IOPS. The chief financial officer of the company is concerned about costs and has asked a solutions architect to recommend the most cost-effective storage option that does not sacrifice performance.
Which solution should the solutions architect recommend?

A. Amazon Elastic Block Store (Amazon EBS) Cold HDD (sc1)

B. Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2)

C. Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1)

D. Amazon Elastic Block Store (Amazon EBS) Throughput Optimized HDD (st1)

A

A. Amazon EBS Cold HDD (sc1) is a cost-effective option with lower performance and is more suitable for infrequently accessed data that doesn’t require high IOPS. It may not meet the performance requirements during peak periods when disk I/O reaches 3,000 IOPS.

C. Amazon EBS Provisioned IOPS SSD (io1) allows you to provision a specific number of IOPS, but it may be overkill and more costly for this scenario if you provision the maximum required 3,000 IOPS continuously.

D. Amazon EBS Throughput Optimized HDD (st1) is optimized for sequential workloads and is not designed for high IOPS requirements. It may not meet the performance needs during peak I/O periods.
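
B, General Purpose SSD (gp2), is the correct answer. A gp2 volume earns a baseline of 3 IOPS per GiB (with a 100 IOPS floor) and, for volumes under 1 TiB, can burst to 3,000 IOPS on accumulated I/O credits, which suits an infrequently used workload with short morning and evening peaks. The arithmetic for a 200 GiB volume:

# gp2 performance model for a 200 GiB volume.
size_gib = 200
baseline_iops = max(100, 3 * size_gib)   # 3 IOPS/GiB with a 100 IOPS floor -> 600
burst_iops = 3000                        # credit-based burst for volumes < 1 TiB
print(baseline_iops, burst_iops)         # idle hours refill the credits spent at peak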

17
Q

A company’s application hosted on Amazon EC2 instances needs to access an Amazon S3 bucket. Due to data sensitivity, traffic cannot traverse the internet.
How should a solutions architect configure access?

A. Create a private hosted zone using Amazon Route 53.

B. Configure a VPC gateway endpoint for Amazon S3 in the VPC.

C. Configure AWS PrivateLink between the EC2 instance and the S3 bucket.

D. Set up a site-to-site VPN connection between the VPC and the S3 bucket.

A

Option A, creating a private hosted zone using Amazon Route 53, is related to DNS resolution but does not provide a direct mechanism for accessing Amazon S3 without internet traffic.

Option C, configuring AWS PrivateLink, is a valid solution for private access to AWS services but typically involves setting up VPC endpoints for various AWS services within your VPC. It may be more complex compared to option B if your primary goal is to secure access to Amazon S3 specifically.

Option D, setting up a site-to-site VPN connection between the VPC and the S3 bucket, is not the most efficient solution for this scenario. It’s more complex, involves more overhead, and doesn’t provide the same level of direct, private access as a VPC gateway endpoint.
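
Option B, the VPC gateway endpoint, is the correct answer: it adds an S3 route to the subnet route tables so traffic never leaves the AWS network, and gateway endpoints carry no hourly or data processing charge. A minimal boto3 sketch (the IDs and Region are placeholders):

import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",               # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",    # the service name is Region-specific
    RouteTableIds=["rtb-0123456789abcdef0"],     # route table of the private subnet
)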

18
Q

A company has two applications it wants to migrate to AWS. Both applications process a large set of files by accessing the same files at the same time. Both applications need to read the files with low latency.
Which architecture should a solutions architect recommend for this situation?

A. Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an instance store volume to store the data.

B. Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an Amazon Elastic Block Store (Amazon EBS) volume to store the data.

C. Configure one memory optimized Amazon EC2 instance to run both applications simultaneously. Create an Amazon Elastic Block Store (Amazon EBS) volume with Provisioned IOPS to store the data.

D. Configure two Amazon EC2 instances to run both applications. Configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data.

A

A. Using AWS Lambda functions for running the applications is not suitable for applications that need to process a large set of files simultaneously with low latency. EC2 instances would be better suited for this purpose.

B. Similar to option A, using AWS Lambda functions with an EC2 instance and Amazon EBS volume is not the best choice for processing large sets of files simultaneously with low latency.

C. Configuring one EC2 instance to run both applications may lead to resource contention and may not provide the desired low-latency access to the files, especially if both applications need to access the files at the same time.

19
Q

An ecommerce company has noticed performance degradation of its Amazon RDS based web application. The performance degradation is attributed to an increase in the number of read-only SQL queries triggered by business analysts. A solutions architect needs to solve the problem with minimal changes to the existing web application.
What should the solutions architect recommend?

A. Export the data to Amazon DynamoDB and have the business analysts run their queries.

B. Load the data into Amazon ElastiCache and have the business analysts run their queries.

C. Create a read replica of the primary database and have the business analysts run their queries.

D. Copy the data into an Amazon Redshift cluster and have the business analysts run their queries.

A

A. Exporting the data to Amazon DynamoDB would require significant changes to the application to use DynamoDB as a data store, and it may not be the most efficient solution for running complex SQL queries.

B. Loading the data into Amazon ElastiCache (e.g., Redis or Memcached) is suitable for caching frequently accessed data but does not provide the capability to run complex SQL queries that business analysts need.

D. Copying the data into an Amazon Redshift cluster would involve significant changes to the application to use Redshift as a data warehouse. It is a suitable choice for analytics workloads but may not be the most straightforward solution in this context.
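
C, creating a read replica, is the correct answer: the analysts get their own read-only endpoint, the primary is relieved of their query load, and the only change required is the connection string the analysts use. A minimal boto3 sketch (identifiers are placeholders):

import boto3

rds = boto3.client("rds")

replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="webapp-db-analytics",   # placeholder name for the replica
    SourceDBInstanceIdentifier="webapp-db",       # the existing primary instance
)
print(replica["DBInstance"]["DBInstanceIdentifier"])   # point the analysts here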

20
Q

A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS database. Compliance regulations mandate that all personally identifiable information (PII) be encrypted at rest.
Which solution should a solutions architect recommend to meet this requirement with the LEAST amount of changes to the infrastructure?

A. Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database volume.

B. Deploy AWS CloudHSM, generate encryption keys, and use the keys to encrypt database volumes.

C. Configure SSL encryption using AWS Key Management Service (AWS KMS) to encrypt database volumes.

D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.

A

A. Deploying AWS Certificate Manager and using certificates is primarily for securing data in transit, not for encrypting data at rest. It doesn’t address the requirement for encrypting PII data stored on Amazon EBS and RDS volumes.

B. Deploying AWS CloudHSM and generating encryption keys is a valid option for encryption, but it involves significant changes to the infrastructure, which goes against the requirement to make the least amount of changes.

C. Configuring SSL encryption with AWS KMS for encrypting data in transit doesn’t directly address the requirement for encrypting data at rest on Amazon EBS and RDS volumes.

21
Q

A company running an on-premises application is migrating the application to AWS to increase its elasticity and availability. The current architecture uses a Microsoft SQL Server database with heavy read activity. The company wants to explore alternate database options and migrate database engines, if needed.
Every 4 hours, the development team does a full copy of the production database to populate a test database. During this period, users experience latency.
What should a solutions architect recommend as a replacement database?

A. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore from mysqldump for the test database.

B. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore snapshots from Amazon RDS for the test database.

C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas, and use the standby instance for the test database.

D. Use Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and restore snapshots from RDS for the test database.

A

A. Amazon Aurora is a good option, but it is compatible with MySQL and PostgreSQL, not Microsoft SQL Server. Therefore, it may not be suitable if the current database is Microsoft SQL Server.

B. Restoring an RDS snapshot is the right mechanism for populating the test database, but Amazon Aurora is not compatible with Microsoft SQL Server, and a snapshot of an RDS for SQL Server instance cannot be restored into an Aurora cluster, so this option would require a full engine migration to work at all.

C. Amazon RDS for MySQL is a different engine from Microsoft SQL Server, and the standby instance in a Multi-AZ deployment cannot be used for reads or testing; it exists only for failover, so it cannot host the test database.

22
Q

A company has enabled AWS CloudTrail logs to deliver log files to an Amazon S3 bucket for each of its developer accounts. The company has created a central AWS account for streamlining management and audit reviews. An internal auditor needs to access the CloudTrail logs, yet access needs to be restricted for all developer account users. The solution must be secure and optimized.
How should a solutions architect meet these requirements?

A. Configure an AWS Lambda function in each developer account to copy the log files to the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket.

B. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket.

C. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket.

D. Configure an AWS Lambda function in the central account to copy the log files from the S3 bucket in each developer account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket.

A

A. While this option involves copying the log files using Lambda functions, it does not provide read-only permissions for the auditor in a secure and optimized manner. It introduces complexity and may not be as efficient as using CloudTrail’s native log delivery to an S3 bucket.

B. Creating an IAM user with full permissions in the central account for the auditor is not a best practice, as it could potentially compromise security by providing more access than necessary.

D. While this option involves Lambda functions in the central account, it is not necessary to copy the log files. CloudTrail can natively deliver logs to an S3 bucket in the central account, and IAM roles with appropriate permissions for the auditor can be used for secure access.

23
Q

A company has several business systems that require access to data stored in a file share. The business systems will access the file share using the Server Message Block (SMB) protocol. The file share solution should be accessible from both of the company’s legacy on-premises environments and from AWS.
Which services meet the business requirements? (Choose two.)

A. Amazon Elastic Block Store (Amazon EBS)

B. Amazon Elastic File System (Amazon EFS)

C. Amazon FSx for Windows

D. Amazon S3

E. AWS Storage Gateway file gateway

A

While options A (Amazon EBS) and D (Amazon S3) are storage services, they are not typically used as shared file systems accessible via SMB. Amazon EBS is block storage, and Amazon S3 is object storage, so they don’t natively support SMB file sharing.

Option B, Amazon Elastic File System (Amazon EFS), is not listed as one of the suitable options because while Amazon EFS is a managed file storage service in AWS, it primarily supports the Network File System (NFS) protocol and not the Server Message Block (SMB) protocol.

24
Q

A company is using Amazon EC2 to run its big data analytics workloads. These variable workloads run each night, and it is critical they finish by the start of business the following day. A solutions architect has been tasked with designing the MOST cost-effective solution.
Which solution will accomplish this?

A. Spot Fleet

B. Spot Instances

C. Reserved Instances

D. On-Demand Instances

A

Spot Fleet (Option A) is a collection of Spot Instances, and while it can be cost-effective, it does not provide the same level of reliability and assurance as On-Demand Instances.

While Spot Instances (Option B) can be significantly cheaper than On-Demand Instances, they come with the risk of being interrupted when EC2 needs the capacity back or the Spot price rises above your maximum price. This risk is not suitable for workloads that require strict adherence to deadlines.

Reserved Instances (Option C) provide cost savings for steady-state workloads with predictable usage but do not offer the flexibility and immediate availability of On-Demand Instances.

25
Q

A company has a Microsoft Windows-based application that must be migrated to AWS. This application requires the use of a shared Windows file system attached to multiple Amazon EC2 Windows instances.
What should a solutions architect do to accomplish this?

A. Configure a volume using Amazon Elastic File System (Amazon EFS). Mount the EFS volume to each Windows instance.

B. Configure AWS Storage Gateway in Volume Gateway mode. Mount the volume to each Windows instance.

C. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx volume to each Windows instance.

D. Configure an Amazon Elastic Block Store (Amazon EBS) volume with the required size. Attach each EC2 instance to the volume. Mount the file system within the volume to each Windows instance.

A

A. Amazon Elastic File System (Amazon EFS) is a managed file storage service that supports NFS for Unix-like systems, but it does not natively support the Server Message Block (SMB) protocol used by Windows. While you can set up third-party software to make it work with Windows instances, it’s not the most straightforward solution for Windows-based applications.

B. AWS Storage Gateway in Volume Gateway mode is primarily used for block storage and may not provide the seamless shared file system experience required for Windows instances.

D. Amazon Elastic Block Store (Amazon EBS) volumes are block-level storage and do not provide native file system functionality for shared access among multiple instances. You would need additional configurations to achieve shared file access, which can be complex.

26
Q

A company has created an isolated backup of its environment in another Region. The application is running in warm standby mode and is fronted by an Application Load Balancer (ALB). The current failover process is manual and requires updating a DNS alias record to point to the secondary ALB in another Region.
What should a solutions architect do to automate the failover process?

A. Enable an ALB health check

B. Enable an Amazon Route 53 health check.

C. Create a CNAME record on Amazon Route 53 pointing to the ALB endpoint.

D. Create conditional forwarding rules on Amazon Route 53 pointing to an internal BIND DNS server.

A

A. Enabling an ALB health check would allow the ALB to perform health checks on its targets, but it does not address the cross-Region failover scenario or automate the failover process.

C. Creating a CNAME record on Amazon Route 53 pointing to the ALB endpoint is useful for DNS resolution but does not provide automated failover capabilities. You would still need a health check mechanism to determine when to switch the CNAME to the secondary ALB.

D. Creating conditional forwarding rules on Amazon Route 53 pointing to an internal BIND DNS server is unrelated to this scenario and does not address the failover automation between AWS Regions.
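
B, enabling an Amazon Route 53 health check, is the correct answer: combined with a failover routing policy, Route 53 monitors the primary ALB and automatically answers DNS queries with the secondary record when the check fails. A rough boto3 sketch of the two pieces (the zone IDs, domain, and ALB DNS names are placeholders):

import boto3

route53 = boto3.client("route53")

check = route53.create_health_check(
    CallerReference="primary-alb-check-1",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary-alb-123.us-east-1.elb.amazonaws.com",
        "Port": 443,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

def failover_change(role, dns_name, alb_zone_id, health_check_id=None):
    record = {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": role.lower(),
        "Failover": role,                    # "PRIMARY" or "SECONDARY"
        "AliasTarget": {
            "HostedZoneId": alb_zone_id,     # hosted zone ID of the ALB itself
            "DNSName": dns_name,
            "EvaluateTargetHealth": False,
        },
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",       # placeholder public hosted zone
    ChangeBatch={"Changes": [
        failover_change("PRIMARY", "primary-alb-123.us-east-1.elb.amazonaws.com",
                        "Z35SXDOTRQ7X7K", check["HealthCheck"]["Id"]),
        failover_change("SECONDARY", "backup-alb-456.us-west-2.elb.amazonaws.com",
                        "Z1H1FL5HABSF5"),
    ]},
)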

27
Q

A company has a mobile chat application with a data store based in Amazon DynamoDB. Users would like new messages to be read with as little latency as possible. A solutions architect needs to design an optimal solution that requires minimal application changes.
Which method should the solutions architect select?

A. Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code to use the DAX endpoint.

B. Add DynamoDB read replicas to handle the increased read load. Update the application to point to the read endpoint for the read replicas.

C. Double the number of read capacity units for the new messages table in DynamoDB. Continue to use the existing DynamoDB endpoint.

D. Add an Amazon ElastiCache for Redis cache to the application stack. Update the application to point to the Redis cache endpoint instead of DynamoDB.

A

B. Adding DynamoDB read replicas can help distribute read load and improve scalability, but it may not provide the lowest latency for reads, especially if you have a requirement for minimal latency. Application code changes are also required to point to the read endpoint for read replicas.

C. Doubling the number of read capacity units for the DynamoDB table can increase read capacity but may not necessarily reduce read latency. It also increases cost, and you may still face latency issues if the data is not cached.

D. Adding Amazon ElastiCache for Redis can improve read latency, but it requires significant code changes to point to the Redis cache endpoint instead of DynamoDB. This option introduces complexity and may not meet the requirement for minimal application changes.

28
Q

A company is creating an architecture for a mobile app that requires minimal latency for its users. The company’s architecture consists of Amazon EC2 instances behind an Application Load Balancer running in an Auto Scaling group. The EC2 instances connect to Amazon RDS. Application beta testing showed there was a slowdown when reading the data. However, the metrics indicate that the EC2 instances do not cross any CPU utilization thresholds.
How can this issue be addressed?

A. Reduce the threshold for CPU utilization in the Auto Scaling group.

B. Replace the Application Load Balancer with a Network Load Balancer.

C. Add read replicas for the RDS instances and direct read traffic to the replica.

D. Add Multi-AZ support to the RDS instances and direct read traffic to the new EC2 instance.

A

A. Reducing the threshold for CPU utilization in the Auto Scaling group will not directly address the issue of slowdown when reading data. The CPU utilization may not be the bottleneck in this scenario.

B. Replacing the Application Load Balancer with a Network Load Balancer may provide benefits in specific scenarios, but it won’t necessarily address the performance issue related to RDS read latency.

D. Adding Multi-AZ support to the RDS instances primarily provides high availability and redundancy but may not directly improve read performance.

29
Q

A company has implemented one of its microservices on AWS Lambda that accesses an Amazon DynamoDB table named Books. A solutions architect is designing an IAM policy to be attached to the Lambda function’s IAM role, giving it access to put, update, and delete items in the Books table. The IAM policy must prevent the function from performing any other actions on the Books table or any other table.
Which IAM policy would fulfill these needs and provide the LEAST privileged access?

A.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PutUpdateDeleteOnBooks",
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
    }
  ]
}

B.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PutUpdateDeleteOnBooks",
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/*"
    }
  ]
}

C.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PutUpdateDeleteOnBooks",
      "Effect": "Allow",
      "Action": "dynamodb:*",
      "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
    }
  ]
}

D.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PutUpdateDeleteOnBooks",
      "Effect": "Allow",
      "Action": "dynamodb:*",
      "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
    },
    {
      "Sid": "PutUpdateDeleteOnBooks",
      "Effect": "Deny",
      "Action": "dynamodb:*",
      "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
    }
  ]
}

A

A. Policy A allows only dynamodb:PutItem, dynamodb:UpdateItem, and dynamodb:DeleteItem, and only on the Books table ARN, so it grants exactly the required permissions and nothing more. B applies the same actions to every table (table/*), C allows every DynamoDB action on the table, and D pairs an Allow with an explicit Deny of all DynamoDB actions, which would block even the required operations.

30
Q

A company hosts its website on Amazon S3. The website serves petabytes of outbound traffic monthly, which accounts for most of the company’s AWS costs.
What should a solutions architect do to reduce costs?

A. Configure Amazon CloudFront with the existing website as the origin.

B. Move the website to Amazon EC2 with Amazon Elastic Block Store (Amazon EBS) volumes for storage.

C. Use AWS Global Accelerator and specify the existing website as the endpoint.

D. Rearchitect the website to run on a combination of Amazon API Gateway and AWS Lambda.

A

B. Moving the website to Amazon EC2 with Amazon Elastic Block Store (Amazon EBS) volumes would likely increase costs due to the need to manage and provision EC2 instances and storage volumes. It may also introduce additional complexity.

C. AWS Global Accelerator is used to improve the availability and performance of applications by directing traffic to optimal AWS endpoints across multiple AWS regions. While it can enhance availability and performance, it doesn’t directly address the cost of outbound data transfer.

D. Rearchitecting the website to run on Amazon API Gateway and AWS Lambda may provide other benefits but would involve significant changes and may not directly reduce outbound data transfer costs.

31
Q

A company runs a website on Amazon EC2 instances behind an ELB Application Load Balancer. Amazon Route 53 is used for the DNS. The company wants to set up a backup website with a message including a phone number and email address that users can reach if the primary website is down.
How should the company deploy this solution?

A. Use Amazon S3 website hosting for the backup website and Route 53 failover routing policy.

B. Use Amazon S3 website hosting for the backup website and Route 53 latency routing policy.

C. Deploy the application in another AWS Region and use ELB health checks for failover routing.

D. Deploy the application in another AWS Region and use server-side redirection on the primary website.

A

Option B (using Route 53 latency routing policy) is not the best fit for this scenario because latency routing is typically used to route traffic to the endpoint with the lowest latency, not for failover purposes.

Option C (deploying the application in another AWS Region and using ELB health checks) would work but may involve more complexity and higher costs compared to the S3-based solution.

Option D (using server-side redirection on the primary website) is not ideal for failover scenarios because if the primary website is down, the redirection may not work as intended. It’s better to have a separate, independent backup solution like the one described in option A.

32
Q

A media company is evaluating the possibility of moving its systems to the AWS Cloud. The company needs at least 10 TB of storage with the maximum possible I/O performance for video processing, 300 TB of very durable storage for storing media content, and 900 TB of storage to meet requirements for archival media that is not in use anymore.
Which set of services should a solutions architect recommend to meet these requirements?

A. Amazon Elastic Block Store (Amazon EBS) for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage

B. Amazon Elastic Block Store (Amazon EBS) for maximum performance, Amazon Elastic File System (Amazon EFS) for durable data storage, and Amazon S3 Glacier for archival storage

C. Amazon EC2 instance store for maximum performance, Amazon Elastic File System (Amazon EFS) for durable data storage, and Amazon S3 for archival storage

D. Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage

A

Option B suggests using Amazon Elastic File System (Amazon EFS) for durable data storage. While Amazon EFS is a managed file storage service with durability, it may not provide the same level of performance as Amazon EBS for video processing workloads that require high I/O performance.

Option C suggests using Amazon EC2 instance store for maximum performance. However, EC2 instance store volumes are ephemeral and not suitable for storing data that needs to be retained persistently.

Option D suggests using Amazon EC2 instance store for maximum performance, which may be suitable for certain temporary processing tasks but doesn’t provide durable storage for media content and archival requirements.

33
Q

A company uses Amazon S3 as its object storage solution. The company has thousands of S3 buckets it uses to store data. Some of the S3 buckets have data that is accessed less frequently than others. A solutions architect found that lifecycle policies are not consistently implemented or are implemented partially, resulting in data being stored in high-cost storage.
Which solution will lower costs without compromising the availability of objects?

A. Use S3 ACLs.

B. Use Amazon Elastic Block Store (Amazon EBS) automated snapshots.

C. Use S3 Intelligent-Tiering storage.

D. Use S3 One Zone-Infrequent Access (S3 One Zone-IA).

A

A. Use S3 ACLs:
Amazon S3 ACLs (Access Control Lists) are primarily used for controlling access to objects within buckets. They are not designed for cost optimization or automated management of storage classes based on access patterns. Using ACLs won’t help you automatically transition objects between different storage classes to reduce costs. It’s more focused on managing object-level permissions.

B. Use Amazon Elastic Block Store (Amazon EBS) automated snapshots:
Amazon EBS is block storage used for EC2 instances and is not related to Amazon S3 object storage. EBS automated snapshots are used for creating point-in-time backups of EBS volumes, and they are not applicable to S3 storage.

D. Use S3 One Zone-Infrequent Access (S3 One Zone-IA):
S3 One Zone-IA is a storage class designed for objects that can be recreated if lost. It stores data in a single Availability Zone, which makes it less durable than the standard S3 storage classes. While it is cost-effective, it may not be suitable for data that requires high durability or redundancy, especially for critical data or data that can’t be easily recreated. Using it for all objects may not be the best choice if data durability and availability are important considerations.
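
C, S3 Intelligent-Tiering, is the correct answer: it monitors each object’s access pattern and shifts it between access tiers automatically, so inconsistent lifecycle policies stop driving up costs, and objects remain immediately available. Objects can be written straight into the class, as in this minimal boto3 sketch (bucket and key are placeholders):

import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="company-data-bucket",           # placeholder
    Key="datasets/2024/usage.parquet",      # placeholder
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",     # S3 tiers the object by access pattern
)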

34
Q

An application is running on Amazon EC2 instances. Sensitive information required for the application is stored in an Amazon S3 bucket. The bucket needs to be protected from internet access while only allowing services within the VPC access to the bucket.
Which combination of actions should a solutions architect take to accomplish this? (Choose two.)

A. Create a VPC endpoint for Amazon S3.

B. Enable server access logging on the bucket.

C. Apply a bucket policy to restrict access to the S3 endpoint.

D. Add an S3 ACL to the bucket that has sensitive information.

E. Restrict users using the IAM policy to use the specific bucket.

A

B. Enabling server access logging on the bucket: Server access logging helps you track requests made to the S3 bucket but doesn’t directly control access to the bucket itself.

D. Adding an S3 ACL to the bucket that has sensitive information: S3 ACLs control access to individual objects within the bucket but are not used to restrict access to the entire bucket within a VPC.

E. Restricting users using the IAM policy to use the specific bucket: IAM policies control user or role permissions for accessing AWS services, but they don’t control VPC-level access to S3 buckets.
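
The correct pair is A and C: create a VPC gateway endpoint for Amazon S3, then attach a bucket policy that rejects any request not arriving through that endpoint. A sketch of such a policy, assuming a placeholder endpoint ID (note that a blanket deny like this also blocks console access from outside the VPC):

import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnlessFromVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::sensitive-data-bucket",
                "arn:aws:s3:::sensitive-data-bucket/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}
s3.put_bucket_policy(Bucket="sensitive-data-bucket", Policy=json.dumps(policy))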

35
Q

A web application runs on Amazon EC2 instances behind an Application Load Balancer. The application allows users to create custom reports of historical weather data. Generating a report can take up to 5 minutes. These long-running requests use many of the available incoming connections, making the system unresponsive to other users.
How can a solutions architect make the system more responsive?

A. Use Amazon SQS with AWS Lambda to generate reports.

B. Increase the idle timeout on the Application Load Balancer to 5 minutes.

C. Update the client-side application code to increase its request timeout to 5 minutes.

D. Publish the reports to Amazon S3 and use Amazon CloudFront for downloading to the user.

A

B. Increasing the idle timeout on the Application Load Balancer to 5 minutes: This option would increase the timeout for connections between the client and the load balancer, but it doesn’t solve the problem of tying up resources for report generation within the web application.

C. Updating the client-side application code to increase its request timeout to 5 minutes: Changing the client-side request timeout would simply delay the response to the user but wouldn’t make the system more responsive to other users.

D. Publishing the reports to Amazon S3 and using Amazon CloudFront for downloading to the user: While this approach might improve the delivery of generated reports to users, it does not address the issue of the web application becoming unresponsive during report generation.
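
A, using Amazon SQS with AWS Lambda, is the correct answer: the web tier acknowledges the request immediately and hands the 5-minute report job to a queue for asynchronous processing, freeing its connections for other users. A minimal sketch of the enqueue side (the queue name and payload shape are assumptions):

import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="report-requests")["QueueUrl"]  # hypothetical queue

def submit_report_request(user_id, report_params):
    # Enqueue the job and return at once instead of holding the connection open.
    resp = sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps({"user": user_id, "params": report_params}),
    )
    return resp["MessageId"]   # the client later polls for (or is notified of) the result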

36
Q

A solutions architect must create a highly available bastion host architecture. The solution needs to be resilient within a single AWS Region and should require only minimal effort to maintain.
What should the solutions architect do to meet these requirements?

A. Create a Network Load Balancer backed by an Auto Scaling group with a UDP listener.

B. Create a Network Load Balancer backed by a Spot Fleet with instances in a partition placement group.

C. Create a Network Load Balancer backed by the existing servers in different Availability Zones as the target.

D. Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones as the target.

A

A. Create a Network Load Balancer backed by an Auto Scaling group with a UDP listener: This option mentions using a UDP listener, which is typically used for specific UDP-based applications, and may not be appropriate for a bastion host use case.

B. Create a Network Load Balancer backed by a Spot Fleet with instances in a partition placement group: While Spot Instances are cost-effective, they are not guaranteed to be available and can be terminated with short notice. For a bastion host, it’s important to have reliable and stable instances.

C. Create a Network Load Balancer backed by the existing servers in different Availability Zones as the target: This option implies using existing servers as the targets, which may not provide the desired level of redundancy and may require manual intervention to maintain high availability.

37
Q

A three-tier web application processes orders from customers. The web tier consists of Amazon EC2 instances behind an Application Load Balancer, a middle tier of three EC2 instances decoupled from the web tier using Amazon SQS, and an Amazon DynamoDB backend. At peak times, customers who submit orders using the site have to wait much longer than normal to receive confirmations due to lengthy processing times. A solutions architect needs to reduce these processing times.
Which action will be MOST effective in accomplishing this?

A. Replace the SQS queue with Amazon Kinesis Data Firehose.

B. Use Amazon ElastiCache for Redis in front of the DynamoDB backend tier.

C. Add an Amazon CloudFront distribution to cache the responses for the web tier.

D. Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth.

A

Option A would replace the SQS queue with Kinesis Data Firehose, which is a delivery stream for loading data into destinations such as Amazon S3 or Amazon Redshift, not a work queue for order processing. Option B adds a cache in front of DynamoDB, which can speed up reads but does nothing about the backlog of messages waiting for the three middle-tier instances.

Option C, adding Amazon CloudFront to cache responses for the web tier, can help improve the latency experienced by end-users but may not directly address the processing delays in the middle tier that are causing orders to take longer to confirm. Therefore, while it can be beneficial for overall performance, it might not be the MOST effective solution for reducing processing times for orders during peak periods.
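
D, scaling the middle tier on SQS queue depth, is the correct answer: workers are added exactly when orders back up. One way to express this is a target tracking policy on the queue’s ApproximateNumberOfMessagesVisible metric, sketched below (names and the target value are assumptions; AWS’s own guidance refines this into a backlog-per-instance metric):

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="order-processing-asg",   # placeholder
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "orders"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,   # aim for ~100 queued messages; tune per workload
    },
)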

38
Q

A company relies on an application that needs at least 4 Amazon EC2 instances during regular traffic and must scale up to 12 EC2 instances during peak loads.
The application is critical to the business and must be highly available.
Which solution will meet these requirements?

A. Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 4 and the maximum to 12, with 2 in Availability Zone A and 2 in Availability Zone B.

B. Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 4 and the maximum to 12, with all 4 in Availability Zone A.

C. Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 8 and the maximum to 12, with 4 in Availability Zone A and 4 in Availability Zone B.

D. Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 8 and the maximum to 12, with all 8 in Availability Zone A.

A

A. Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 4 and the maximum to 12, with 2 in Availability Zone A and 2 in Availability Zone B: This option distributes instances across two Availability Zones, but with a minimum of 4 (2 per AZ), the loss of one Availability Zone would leave only 2 running instances, below the 4 the application requires, until replacements launch.

B. Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 4 and the maximum to 12, with all 4 in Availability Zone A: This option does not provide high availability because all instances are in a single Availability Zone.

D. Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 8 and the maximum to 12, with all 8 in Availability Zone A: This option does not provide high availability because all instances are in a single Availability Zone.
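
A minimal boto3 sketch of option C is below, assuming a launch template named app-lt already exists and that subnet-aaaa1111 and subnet-bbbb2222 sit in different Availability Zones (all names and IDs are placeholders).

import boto3

autoscaling = boto3.client("autoscaling")

# Min 8 / max 12 across two AZs: losing one AZ still leaves 4 instances
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="critical-app-asg",
    LaunchTemplate={"LaunchTemplateName": "app-lt", "Version": "$Latest"},
    MinSize=8,
    MaxSize=12,
    DesiredCapacity=8,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # AZ A and AZ B
)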

39
Q

A solutions architect must design a solution for a persistent database that is being migrated from on-premises to AWS. The database requires 64,000 IOPS according to the database administrator. If possible, the database administrator wants to use a single Amazon Elastic Block Store (Amazon EBS) volume to host the database instance.
Which solution effectively meets the database administrator’s criteria?

A. Use an instance from the I3 I/O optimized family and leverage local ephemeral storage to achieve the IOPS requirement.

B. Create a Nitro-based Amazon EC2 instance with an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1) volume attached. Configure the volume to have 64,000 IOPS.

C. Create and map an Amazon Elastic File System (Amazon EFS) volume to the database instance and use the volume to achieve the required IOPS for the database.

D. Provision two volumes and assign 32,000 IOPS to each. Create a logical volume at the operating system level that aggregates both volumes to achieve the IOPS requirements.

A

Option A relies on local instance store (ephemeral) storage, which is lost when the instance stops or the hardware fails, making it unsuitable for a persistent database. Option D aggregates two EBS volumes at the operating system level, which adds configuration complexity and contradicts the administrator's preference for a single volume.

Option C suggests using Amazon Elastic File System (Amazon EFS), but EFS does not provide the level of IOPS control and performance needed for high I/O database workloads. It’s better suited for file storage use cases.
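
For reference, a boto3 sketch of option B is below. io1 allows up to 50 IOPS per provisioned GiB, so 64,000 IOPS needs a volume of at least 1,280 GiB, and that IOPS ceiling is only delivered when the volume is attached to a Nitro-based instance; the identifiers are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Provisioned IOPS SSD (io1): 64,000 IOPS requires >= 1,280 GiB (50 IOPS/GiB)
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="io1",
    Size=1280,
    Iops=64000,
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach to a Nitro-based instance (e.g., m5/c5/r5 families)
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder Nitro instance
    Device="/dev/sdf",
)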

40
Q

A solutions architect is designing an architecture for a new application that requires low network latency and high network throughput between Amazon EC2 instances. Which component should be included in the architectural design?

A. An Auto Scaling group with Spot Instance types.

B. A placement group using a cluster placement strategy.

C. A placement group using a partition placement strategy.

D. An Auto Scaling group with On-Demand instance types.

A

Option A (Auto Scaling group with Spot Instance types) does not guarantee that instances will be placed in close physical proximity, so network latency between them can vary.

Option C (a placement group using a partition placement strategy) is designed for distributed, fault-tolerant workloads rather than achieving low latency between instances.

Option D (Auto Scaling group with On-Demand instance types) does not address the specific requirements of low network latency and high network throughput and does not control the placement of instances in the same physical cluster.
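
A short boto3 sketch of option B: create a cluster placement group, then launch instances into it. The AMI ID is a placeholder, and an enhanced-networking instance type such as c5n is assumed as the usual choice for this pattern.

import boto3

ec2 = boto3.client("ec2")

# Cluster strategy packs instances close together for low latency / high throughput
ec2.create_placement_group(GroupName="low-latency-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.9xlarge",        # supports enhanced networking
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "low-latency-cluster"},
)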

41
Q

A company has global users accessing an application deployed in different AWS Regions, exposing public static IP addresses. The users are experiencing poor performance when accessing the application over the internet.
What should a solutions architect recommend to reduce internet latency?

A. Set up AWS Global Accelerator and add endpoints.

B. Set up AWS Direct Connect locations in multiple Regions.

C. Set up an Amazon CloudFront distribution to access an application.

D. Set up an Amazon Route 53 geoproximity routing policy to route traffic.

A

Option B (AWS Direct Connect) is not designed to reduce internet latency for global users. It provides dedicated network connections between on-premises locations and AWS, but it doesn’t optimize internet traffic for global users.

Option C (Amazon CloudFront) is a content delivery network (CDN) service that can improve content delivery by caching content at edge locations. While it can help with content delivery, it may not be sufficient for optimizing applications with dynamic content and interactive functionality.

Option D (Amazon Route 53 geoproximity routing) can route traffic based on the geographic location of users, but it doesn’t have the same level of global network optimization as AWS Global Accelerator.
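
A sketch of option A is below. The Global Accelerator control-plane API is served from us-west-2 regardless of where the endpoints live; the Region, port, and endpoint ARN used here are placeholders.

import boto3

# Global Accelerator's API endpoint lives in us-west-2
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accel = ga.create_accelerator(Name="app-accel", IpAddressType="IPV4", Enabled=True)
accel_arn = accel["Accelerator"]["AcceleratorArn"]

# Accept TCP 443 on the accelerator's static anycast IP addresses
listener = ga.create_listener(
    AcceleratorArn=accel_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# Route users over the AWS backbone to the nearest healthy Regional endpoint
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[
        {"EndpointId": "arn:aws:elasticloadbalancing:...",  # placeholder ALB ARN
         "Weight": 128},
    ],
)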

42
Q

A company wants to migrate a workload to AWS. The chief information security officer requires that all data be encrypted at rest when stored in the cloud. The company wants complete control of encryption key lifecycle management.
The company must be able to immediately remove the key material and audit key usage independently of AWS CloudTrail. The chosen services should integrate with other storage services that will be used on AWS.
Which service satisfies these security requirements?

A. AWS CloudHSM with the CloudHSM client

B. AWS Key Management Service (AWS KMS) with AWS CloudHSM

C. AWS Key Management Service (AWS KMS) with an external key material origin

D. AWS Key Management Service (AWS KMS) with AWS managed customer master keys (CMKs)

A

Option A (AWS CloudHSM with the CloudHSM client) keeps full control of the keys, but CloudHSM alone does not integrate natively with other AWS storage services; each service would need custom client-side encryption against the HSM, complicating key management across the AWS environment.

Option C (AWS KMS with an external key material origin) allows imported key material to be deleted on demand, but key usage would still be audited only through AWS CloudTrail, which does not satisfy the requirement to audit key usage independently of CloudTrail.

Option D (AWS KMS with AWS managed customer master keys) leaves the key lifecycle with AWS, not the company: the company cannot remove the key material or control rotation and deletion itself.
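
A sketch of option B: create a KMS custom key store backed by a CloudHSM cluster, then generate a key whose material lives in the HSMs. The cluster ID, trust-anchor certificate file, and password are placeholders, and the cluster is assumed to already be active with at least two HSMs.

import boto3

kms = boto3.client("kms")

# Custom key store: the KMS API surface, with key material held in your CloudHSM cluster
store = kms.create_custom_key_store(
    CustomKeyStoreName="corp-hsm-store",
    CloudHsmClusterId="cluster-1234567890a",            # placeholder cluster ID
    TrustAnchorCertificate=open("customerCA.crt").read(),
    KeyStorePassword="kmsuser-password",                # CloudHSM kmsuser credential
)
kms.connect_custom_key_store(CustomKeyStoreId=store["CustomKeyStoreId"])

# Keys created here integrate with other AWS services through KMS,
# while CloudHSM keeps its own audit logs independent of CloudTrail
kms.create_key(
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId=store["CustomKeyStoreId"],
    Description="CMK with key material stored in CloudHSM",
)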

43
Q

A company recently deployed a two-tier application in two Availability Zones in the us-east-1 Region. The databases are deployed in a private subnet while the web servers are deployed in a public subnet. An internet gateway is attached to the VPC. The application and database run on Amazon EC2 instances. The database servers are unable to access patches on the internet. A solutions architect needs to design a solution that maintains database security with the least operational overhead.
Which solution meets these requirements?

A. Deploy a NAT gateway inside the public subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route.

B. Deploy a NAT gateway inside the private subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route.

C. Deploy two NAT instances inside the public subnet for each Availability Zone and associate them with Elastic IP addresses. Update the routing table of the private subnet to use it as the default route.

D. Deploy two NAT instances inside the private subnet for each Availability Zone and associate them with Elastic IP addresses. Update the routing table of the private subnet to use it as the default route.

A

Option B suggests deploying a NAT gateway inside the private subnet, which cannot work: a NAT gateway must sit in a public subnet with a route to the internet gateway, so a NAT gateway in a private subnet has no path to the internet and cannot provide outbound access for the database servers.

Options C and D involve NAT instances, which require more operational management than NAT gateways and do not provide the same level of reliability and scalability.
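
A boto3 sketch of option A for one Availability Zone is below; repeat it per AZ with that AZ's public subnet and private route table. All IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Elastic IP for the NAT gateway
eip = ec2.allocate_address(Domain="vpc")

# The NAT gateway must live in a PUBLIC subnet (one routed to the internet gateway)
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public-az-a",            # placeholder public subnet
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# The PRIVATE subnet's default route points at the NAT gateway
ec2.create_route(
    RouteTableId="rtb-private-az-a",          # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)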

44
Q

A company has an application with a REST-based interface that allows data to be received in near-real time from a third-party vendor. Once received, the application processes and stores the data for further analysis. The application is running on Amazon EC2 instances.
The third-party vendor has received many 503 Service Unavailable Errors when sending data to the application. When the data volume spikes, the compute capacity reaches its maximum limit and the application is unable to process all requests.
Which design should a solutions architect recommend to provide a more scalable solution?

A. Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions.

B. Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit for the third-party vendor.

C. Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in an Auto Scaling group behind an Application Load Balancer.

D. Repackage the application as a container. Deploy the application using Amazon Elastic Container Service (Amazon ECS) using the EC2 launch type with an Auto Scaling group.

A

Option B (Amazon API Gateway with a usage plan and quota) controls how much the vendor can send, but throttling rejects requests during spikes instead of absorbing them; it does not make the processing tier more scalable.

Option C using Amazon Simple Notification Service (SNS) and an Auto Scaling group with an Application Load Balancer might work for some use cases, but it doesn’t leverage the real-time processing capabilities of Kinesis Data Streams and AWS Lambda for near-real-time data ingestion and processing.

Option D suggests repackaging the application as a container and deploying it using Amazon Elastic Container Service (Amazon ECS) with an Auto Scaling group. While ECS can provide scalability, it adds complexity compared to the serverless approach using Kinesis Data Streams and AWS Lambda.
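
To illustrate option A, a minimal Lambda consumer for a Kinesis event source is sketched below. Record payloads arrive base64-encoded in the event; the processing step is a placeholder.

import base64
import json

def handler(event, context):
    # Each invocation receives a batch of records from a stream shard
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Placeholder: validate, process, and persist the order data
        print(f"received order: {payload}")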

45
Q

A solutions architect needs to design a low-latency solution for a static single-page application accessed by users utilizing a custom domain name. The solution must be serverless, encrypted in transit, and cost-effective.
Which combination of AWS services and features should the solutions architect use? (Choose two.)

A. Amazon S3

B. Amazon EC2

C. AWS Fargate

D. Amazon CloudFront

E. Elastic Load Balancer

A

Amazon EC2 (Option B): EC2 instances are not serverless and would require manual management, which is not cost-effective for serving static content.

AWS Fargate (Option C): Fargate is a serverless compute engine for containers; running containers just to serve a static single-page application adds unnecessary cost and complexity.

Elastic Load Balancer (Option E): Elastic Load Balancing distributes traffic across compute targets such as EC2 instances; a static site served from object storage has no servers to balance, so it adds cost without benefit.
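
As a sketch of options A and D together, the distribution below fronts an S3 bucket with a custom domain and HTTPS. The bucket name, domain, and certificate ARN are placeholders; the CachePolicyId shown is AWS's managed CachingOptimized policy, and the ACM certificate is assumed to exist in us-east-1.

import boto3
import time

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),
    "Comment": "static single-page application",
    "Enabled": True,
    "DefaultRootObject": "index.html",
    "Aliases": {"Quantity": 1, "Items": ["app.example.com"]},        # custom domain
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "spa-s3",
        "DomainName": "spa-bucket.s3.amazonaws.com",                 # placeholder bucket
        "S3OriginConfig": {"OriginAccessIdentity": ""},
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "spa-s3",
        "ViewerProtocolPolicy": "redirect-to-https",                 # encryption in transit
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",     # managed CachingOptimized
    },
    "ViewerCertificate": {
        "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/...",  # placeholder
        "SSLSupportMethod": "sni-only",
        "MinimumProtocolVersion": "TLSv1.2_2021",
    },
})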

46
Q

A company is migrating to the AWS Cloud. A file server is the first workload to migrate. Users must be able to access the file share using the Server Message Block (SMB) protocol. Which AWS managed service meets these requirements?

A. Amazon Elastic Block Store (Amazon EBS)

B. Amazon EC2

C. Amazon FSx

D. Amazon S3

A

Amazon Elastic Block Store (Amazon EBS) (Option A): Amazon EBS provides block-level storage volumes that can be attached to Amazon EC2 instances. It’s not a managed file system service and doesn’t directly provide SMB file sharing capabilities.

Amazon EC2 (Option B): While you can set up your own file server on Amazon EC2 instances, it would require manual configuration and management, including the setup of the SMB service. It’s not a fully managed service for file sharing.

Amazon S3 (Option D): Amazon S3 is an object storage service and is not designed for traditional file sharing using the SMB protocol. It uses a different storage paradigm and is typically accessed using HTTP/HTTPS rather than SMB.
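
A boto3 sketch of option C: a Multi-AZ Amazon FSx for Windows File Server file system joined to a managed Active Directory. All IDs and capacity values are placeholders.

import boto3

fsx = boto3.client("fsx")

# Fully managed Windows file system exposing an SMB share
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,            # GiB, placeholder
    StorageType="SSD",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 32,             # MB/s, placeholder
        "ActiveDirectoryId": "d-1234567890",  # AWS Managed Microsoft AD, placeholder
    },
)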

47
Q

A solutions architect is designing a customer-facing application. The application is expected to have a variable amount of reads and writes depending on the time of year and clearly defined access patterns throughout the year. Management requires that database auditing and scaling be managed in the AWS Cloud. The Recovery Point Objective (RPO) must be less than 5 hours.
Which solutions can accomplish this? (Choose two.)

A. Use Amazon DynamoDB with auto scaling. Use on-demand backups and AWS CloudTrail.

B. Use Amazon DynamoDB with auto scaling. Use on-demand backups and Amazon DynamoDB Streams.

C. Use Amazon Redshift. Configure concurrency scaling. Enable audit logging. Perform database snapshots every 4 hours.

D. Use Amazon RDS with Provisioned IOPS. Enable the database auditing parameter. Perform database snapshots every 5 hours.

E. Use Amazon RDS with auto scaling. Enable the database auditing parameter. Configure the backup retention period to at least 1 day.

A

AE

A. Amazon DynamoDB with auto scaling handles the variable, well-defined access patterns; on-demand backups keep the RPO under 5 hours, and AWS CloudTrail provides managed auditing.

E. Amazon RDS with auto scaling and the database auditing parameter covers the scaling and auditing requirements, and a backup retention period of at least 1 day enables point-in-time recovery, keeping the RPO well under 5 hours.
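
For option A, DynamoDB auto scaling is configured through Application Auto Scaling. A minimal sketch for read capacity is below; the table name and capacity bounds are placeholders, and write capacity would be registered the same way.

import boto3

aas = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",                          # placeholder table
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Hold consumed read capacity near 70% of provisioned capacity
aas.put_scaling_policy(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyName="orders-read-scaling",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization",
        },
    },
)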

48
Q

A company has migrated an on-premises Oracle database to an Amazon RDS for Oracle Multi-AZ DB instance in the us-east-1 Region. A solutions architect is designing a disaster recovery strategy to have the database provisioned in the us-west-2 Region in case the database becomes unavailable in the us-east-1 Region. The design must ensure the database is provisioned in the us-west-2 Region in a maximum of 2 hours, with a data loss window of no more than 3 hours.
How can these requirements be met?

A. Edit the DB instance and create a read replica in us-west-2. Promote the read replica to master in us-west-2 in case the disaster recovery environment needs to be activated.

B. Select the multi-Region option to provision a standby instance in us-west-2. The standby instance will be automatically promoted to master in us-west-2 in case the disaster recovery environment needs to be created.

C. Take automated snapshots of the database instance and copy them to us-west-2 every 3 hours. Restore the latest snapshot to provision another database instance in us-west-2 in case the disaster recovery environment needs to be activated.

D. Create multimaster read/write instances across multiple AWS Regions. Select VPCs in us-east-1 and us-west-2 for that deployment. Keep the master read/write instance in us-west-2 available to avoid having to activate a disaster recovery environment.

A

Option A: This option involves creating a read replica in us-west-2, which can be used for read scaling but isn’t designed for automatic disaster recovery. Promoting a read replica to a master involves manual intervention and may not meet the 2-hour provisioning and 3-hour data loss window requirements.

Option C: While this approach can provide backups, it relies on manual steps for provisioning and may not meet the 2-hour provisioning requirement. Additionally, copying snapshots and restoring instances may result in a longer data loss window if you have to wait for the latest snapshot to be copied and restored.

Option D: A multimaster setup across multiple AWS Regions is complex to build and maintain and can introduce consistency and latency issues. Keeping a continuously available master read/write instance in us-west-2 is also not cost-effective for a disaster recovery environment that is rarely activated.

49
Q

A monolithic application was recently migrated to AWS and is now running on a single Amazon EC2 instance. Due to application limitations, it is not possible to use automatic scaling to scale out the application. The chief technology officer (CTO) wants an automated solution to restore the EC2 instance in the unlikely event the underlying hardware fails.
What would allow for automatic recovery of the EC2 instance as quickly as possible?

A. Configure an Amazon CloudWatch alarm that triggers the recovery of the EC2 instance if it becomes impaired.

B. Configure an Amazon CloudWatch alarm to trigger an SNS message that alerts the CTO when the EC2 instance is impaired.

C. Configure AWS CloudTrail to monitor the health of the EC2 instance, and if it becomes impaired, trigger instance recovery.

D. Configure an Amazon EventBridge event to trigger an AWS Lambda function once an hour that checks the health of the EC2 instance and triggers instance recovery if the EC2 instance is unhealthy.

A

Option B involves alerting the CTO but doesn’t address automated recovery.

Option C mentions AWS CloudTrail, which records API activity for auditing; it does not monitor instance health and cannot trigger instance recovery.

Option D involves a periodic check by an AWS Lambda function, which may not respond as quickly as a CloudWatch alarm directly monitoring the instance’s health.
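
A sketch of option A is below: a CloudWatch alarm on the system status check whose action is the built-in EC2 recover action. The Region and instance ID are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Recover the instance onto healthy hardware when the SYSTEM status check fails
cloudwatch.put_metric_alarm(
    AlarmName="ec2-auto-recover",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],  # built-in recover action
)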

50
Q

A solutions architect is working on optimizing a legacy document management application running on Microsoft Windows Server in an on-premises data center.
The application stores a large number of files on a network file share. The chief information officer wants to reduce the on-premises data center footprint and minimize storage costs by moving on-premises storage to AWS.
What should the solutions architect do to meet these requirements?

A. Set up an AWS Storage Gateway file gateway.

B. Set up Amazon Elastic File System (Amazon EFS)

C. Set up AWS Storage Gateway as a volume gateway

D. Set up an Amazon Elastic Block Store (Amazon EBS) volume.

A

Option B (Amazon EFS) provides NFS file systems in the cloud; it does not support SMB and offers no on-premises cache for the Windows application. Option C (a volume gateway) exposes block storage over iSCSI rather than a file share. Option D (Amazon EBS) provides block volumes that attach to EC2 instances and cannot be mounted by on-premises servers. A file gateway presents an SMB file share backed by low-cost Amazon S3 storage while caching frequently accessed files locally, which reduces both the on-premises footprint and storage costs.
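
Once a file gateway appliance is activated, an SMB share backed by S3 can be created with a call like the sketch below. The gateway ARN, IAM role, and bucket are placeholders, and the gateway is assumed to have been joined to Active Directory before using AD authentication.

import boto3
import uuid

sgw = boto3.client("storagegateway")

# SMB file share: on-premises clients see a Windows share, data lands in S3
sgw.create_smb_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",  # placeholder
    Role="arn:aws:iam::123456789012:role/FileGatewayS3Access",   # placeholder role
    LocationARN="arn:aws:s3:::document-archive-bucket",          # placeholder bucket
    Authentication="ActiveDirectory",
)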