Q151-200 Flashcards

1
Q

A solutions architect is designing a hybrid application using the AWS cloud. The network between the on-premises data center and AWS will use an AWS Direct Connect (DX) connection. The application connectivity between AWS and the on-premises data center must be highly resilient. Which DX configuration should be implemented to meet these requirements?

A. Configure a DX connection with a VPN on top of it.

B. Configure DX connections at multiple DX locations.

C. Configure a DX connection using the most reliable DX partner.

D. Configure multiple virtual interfaces on top of a DX connection.

A

Option A (Configure a DX connection with a VPN on top of it): While it can add encryption and security through a VPN, it doesn’t inherently address geographic diversity or redundancy.

Option C (Configure a DX connection using the most reliable DX partner): This option focuses on selecting a reliable DX partner but does not inherently provide redundancy or geographic diversity.

Option D (Configure multiple virtual interfaces on top of a DX connection): Multiple virtual interfaces still ride on the same physical connection, so they do not protect against failure of that connection or of its DX location.

2
Q

A company runs an application on Amazon EC2 instances. The application is deployed in private subnets in three Availability Zones of the us-east-1 Region. The instances must be able to connect to the internet to download files. The company wants a design that is highly available across the Region. Which solution should be implemented to ensure that there are no disruptions to internet connectivity?

A. Deploy a NAT instance in a private subnet of each Availability Zone.

B. Deploy a NAT gateway in a public subnet of each Availability Zone.

C. Deploy a transit gateway in a private subnet of each Availability Zone.

D. Deploy an internet gateway in a public subnet of each Availability Zone.

A

Option A, deploying a NAT instance in each private subnet, is not recommended: NAT instances require manual management and do not scale or fail over as well as NAT gateways, and a NAT device placed in a private subnet has no route to an internet gateway, so it could not provide outbound internet access in the first place.

Option C, deploying a transit gateway in private subnets, is used for connecting multiple VPCs or on-premises networks, and it’s not the appropriate solution for providing internet connectivity to EC2 instances.

Option D, deploying an internet gateway in public subnets, is used for inbound and outbound internet traffic and is not suitable for providing outbound internet connectivity for resources in private subnets.

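A minimal boto3 sketch of the correct approach (Option B): one NAT gateway per Availability Zone in that zone's public subnet, with each private route table pointing its default route at the local NAT gateway. The subnet and route table IDs are placeholders, not values from the question.

    import boto3

    ec2 = boto3.client("ec2")

    # One NAT gateway per AZ; subnet and route table IDs are placeholders.
    for public_subnet, private_route_table in [
        ("subnet-pub-az1", "rtb-priv-az1"),
        ("subnet-pub-az2", "rtb-priv-az2"),
        ("subnet-pub-az3", "rtb-priv-az3"),
    ]:
        eip = ec2.allocate_address(Domain="vpc")
        nat_gw_id = ec2.create_nat_gateway(
            SubnetId=public_subnet, AllocationId=eip["AllocationId"]
        )["NatGateway"]["NatGatewayId"]
        ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])
        # Default route for the private subnets in the same AZ.
        ec2.create_route(
            RouteTableId=private_route_table,
            DestinationCidrBlock="0.0.0.0/0",
            NatGatewayId=nat_gw_id,
        )

Keeping each private subnet's default route on the NAT gateway in its own Availability Zone means a failure in one zone does not interrupt outbound internet access from the others.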
3
Q

Application developers have noticed that a production application is very slow when business reporting users run large production reports against the Amazon RDS instance backing the application. The CPU and memory utilization metrics for the RDS instance do not exceed 60% while the reporting queries are running. The business reporting users must be able to generate reports without affecting the application’s performance. Which action will accomplish this?

A. Increase the size of the RDS instance.

B. Create a read replica and connect the application to it.

C. Enable multiple Availability Zones on the RDS instance.

D. Create a read replica and connect the business reports to it.

A

Option A, increasing the size of the RDS instance, may provide some improvement but doesn’t address the isolation of reporting queries from the application, and it can be costly.

Option B, creating a read replica and connecting the application to it, may improve application performance but doesn’t isolate reporting queries, which are typically resource-intensive.

Option C, enabling multiple Availability Zones on the RDS instance, enhances availability and fault tolerance but doesn’t directly address the performance issue related to reporting queries.

4
Q

A company is running a two-tier ecommerce website using AWS services. The current architecture uses a public-facing Elastic Load Balancer that sends traffic to Amazon EC2 instances in a private subnet. The static content is hosted on EC2 instances, and the dynamic content is retrieved from a MySQL database. The application is running in the United States. The company recently started selling to users in Europe and Australia. A solutions architect needs to design a solution so that its international users have an improved browsing experience. Which solution is MOST cost-effective?

A. Host the entire website on Amazon S3.

B. Use Amazon CloudFront and Amazon S3 to host static images.

C. Increase the number of public load balancers and EC2 instances.

D. Deploy the two-tier website in AWS Regions in Europe and Australia.

A

Option A, hosting the entire website on Amazon S3, may not be suitable if the website relies on dynamic content served from EC2 instances and a MySQL database.

Option C, increasing the number of public load balancers and EC2 instances, could be costly and complex, and it may not provide the same level of latency reduction as a CDN like CloudFront.

Option D, deploying the two-tier website in AWS Regions in Europe and Australia, would increase operational complexity and cost significantly, as it involves maintaining separate infrastructure in multiple regions.

5
Q

A company’s website provides users with downloadable historical performance reports. The website needs a solution that will scale to meet the company’s website demands globally. The solution should be cost-effective, limit the provisioning of infrastructure resources, and provide the fastest possible response time. Which combination should a solutions architect recommend to meet these requirements?

A. Amazon CloudFront and Amazon S3

B. AWS Lambda and Amazon DynamoDB

C. Application Load Balancer with Amazon EC2 Auto Scaling

D. Amazon Route 53 with internal Application Load Balancers

A

Option B (AWS Lambda and Amazon DynamoDB) is not the best fit for serving static downloadable reports, as it’s typically used for processing and serverless compute tasks, not for serving files.

Option C (Application Load Balancer with Amazon EC2 Auto Scaling) and Option D (Amazon Route 53 with internal Application Load Balancers) are more appropriate for dynamic web applications that require server-side processing. They may not be as cost-effective for serving static files and may not provide the same level of global distribution and low-latency response as Amazon CloudFront and S3.

6
Q

A company wants to deploy a shared file system for its .NET application servers and Microsoft SQL Server databases running on Amazon EC2 instances with Windows Server 2016. The solution must be able to be integrated into the corporate Active Directory domain, be highly durable, be managed by AWS, and provide high levels of throughput and IOPS. Which solution meets these requirements?

A. Use Amazon FSx for Windows File Server.

B. Use Amazon Elastic File System (Amazon EFS).

C. Use AWS Storage Gateway in file gateway mode.

D. Deploy a Windows file server on two On Demand instances across two Availability Zones.

A

Option B (Amazon Elastic File System or Amazon EFS) can also provide shared file storage and integration with Active Directory, but it may not offer the same level of throughput and IOPS performance as Amazon FSx, and it’s not designed specifically for Windows environments.

Option C (AWS Storage Gateway in file gateway mode) is primarily used for hybrid cloud storage scenarios and may not provide the same level of performance as Amazon FSx for Windows File Server.

Option D (Deploying a Windows file server on EC2 instances) would require manual management and may not provide the same level of durability and ease of integration with Active Directory as Amazon FSx.

7
Q

A company that develops web applications has launched hundreds of Application Load Balancers (ALBs) in multiple Regions. The company wants to create an allow list for the IPs of all the load balancers on its firewall device. A solutions architect is looking for a one-time, highly available solution to address this request, which will also help reduce the number of IPs that need to be allowed by the firewall. What should the solutions architect recommend to meet these requirements?

A. Create an AWS Lambda function to keep track of the IPs for all the ALBs in different Regions. Keep refreshing this list.

B. Set up a Network Load Balancer (NLB) with Elastic IPs. Register the private IPs of all the ALBs as targets to this NLB.

C. Launch AWS Global Accelerator and create endpoints for all the Regions. Register all the ALBs in different Regions to the corresponding endpoints.

D. Set up an Amazon EC2 instance, assign an Elastic IP to this EC2 instance, and configure the instance as a proxy to forward traffic to all the ALBs.

A

A. Creating a Lambda function to keep track of ALB IPs would require ongoing maintenance and might not provide the necessary global reach and performance.

B. Using an NLB with Elastic IPs to register ALB IPs as targets would also require manual management and might not provide the global reach needed.

D. Setting up an EC2 instance as a proxy adds unnecessary complexity and potential points of failure to the architecture.

8
Q

A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API calls to store the resized images in Amazon S3. How can a solutions architect ensure that the application has permission to access Amazon S3?

A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the container.

B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.

C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the launch configuration used by the ECS cluster.

D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS cluster while logged in as this account.

A

The other options mentioned (A, C, and D) involve less secure or less efficient methods of granting permissions and are not the recommended approach for managing permissions in an ECS environment.

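A minimal boto3 sketch of Option B, assuming an IAM role that trusts ecs-tasks.amazonaws.com and carries an S3 read/write policy already exists; the role name, account ID, and image URI are placeholders.

    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="image-resizer",
        # taskRoleArn is the role the containers assume when calling Amazon S3.
        taskRoleArn="arn:aws:iam::123456789012:role/ImageResizerS3AccessRole",
        networkMode="awsvpc",
        requiresCompatibilities=["FARGATE"],
        cpu="256",
        memory="512",
        containerDefinitions=[
            {
                "name": "resizer",
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",
                "essential": True,
            }
        ],
    )

The containers then receive temporary credentials from the task metadata endpoint, so no access keys need to be stored in the image or on the hosts.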
9
Q

A company is planning to migrate its virtual server-based workloads to AWS. The company has internet-facing load balancers backed by application servers. The application servers rely on patches from an internet-hosted repository. Which services should a solutions architect recommend be hosted on the public subnet? (Choose two.)

A. NAT gateway

B. Amazon RDS DB instances

C. Application Load Balancers

D. Amazon EC2 application servers

E. Amazon Elastic File System (Amazon EFS) volumes

A

B. Amazon RDS DB instances: Database instances are typically placed in private subnets to enhance security. Access to the databases can be controlled through security groups and network ACLs, and they do not need direct internet access. The application servers in the private subnet can communicate with the RDS instances.

D. Amazon EC2 application servers: Application servers should be placed in private subnets to minimize their exposure to the internet. They can securely access external repositories for patches via a NAT gateway or other controlled outbound access.

E. Amazon Elastic File System (Amazon EFS) volumes: Like application servers, Amazon EFS volumes are usually placed in private subnets to maintain security and control access. They can be accessed by the application servers in the private subnet, and outbound access to the internet for patching or updates can be controlled via a NAT gateway.

10
Q

A company has established a new AWS account. The account is newly provisioned and no changes have been made to the default settings. The company is concerned about the security of the AWS account root user. What should be done to secure the root user?

A. Create IAM users for daily administrative tasks. Disable the root user.

B. Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.

C. Generate an access key for the root user. Use the access key for daily administration tasks instead of the AWS Management Console.

D. Provide the root user credentials to the most senior solutions architect. Have the solutions architect use the root user for daily administration tasks.

A

A. Create IAM users for daily administrative tasks. Disable the root user: Disabling the root user entirely can be risky because it might be needed in emergency situations. It’s generally better to secure it with MFA rather than disabling it.

C. Generate an access key for the root user. Use the access key for daily administration tasks instead of the AWS Management Console: This approach is not recommended because using access keys for the root user in daily tasks can be less secure than using IAM users with controlled permissions.

D. Provide the root user credentials to the most senior solutions architect. Have the solutions architect use the root user for daily administration tasks: Sharing root user credentials and using them for daily tasks is not a best practice and poses a security risk. It’s important to avoid sharing root credentials and instead use IAM users with fine-grained permissions.

11
Q

A company is using a tape backup solution to store its key application data offsite. The daily data volume is around 50 TB. The company needs to retain the backups for 7 years for regulatory purposes. The backups are rarely accessed, and a week’s notice is typically given if a backup needs to be restored. The company is now considering a cloud-based option to reduce the storage costs and operational burden of managing tapes. The company also wants to make sure that the transition from tape backups to the cloud minimizes disruptions. Which storage solution is MOST cost-effective?

A. Use AWS Storage Gateway to back up to Amazon S3 Glacier Deep Archive.

B. Use AWS Snowball Edge to directly integrate the backups with Amazon S3 Glacier.

C. Copy the backup data to Amazon S3 and create a lifecycle policy to move the data to Amazon S3 Glacier.

D. Use AWS Storage Gateway to back up to Amazon S3 and create a lifecycle policy to move the backup to Amazon S3 Glacier.

A

B. Use AWS Snowball Edge to directly integrate the backups with Amazon S3 Glacier: Less suitable due to complexity, logistical challenges, and operational overhead for daily backup operations.

C. Copy the backup data to Amazon S3 and create a lifecycle policy to move the data to Amazon S3 Glacier: Less suitable due to potentially higher initial storage costs and added complexity in managing data movement.

D. Use Amazon Storage Gateway to back up to Amazon S3 and create a lifecycle policy to move the backup to Amazon S3 Glacier: Less suitable due to similar reasons as Option C, including higher initial storage costs and the need for ongoing policy management.

12
Q

A company requires a durable backup storage solution for its on-premises database servers while ensuring on-premises applications maintain access to these backups for quick recovery. The company will use AWS storage services as the destination for these backups. A solutions architect is designing a solution with minimal operational overhead. Which solution should the solutions architect implement?

A. Deploy an AWS Storage Gateway file gateway on-premises and associate it with an Amazon S3 bucket.

B. Back up the databases to an AWS Storage Gateway volume gateway and access it using the Amazon S3 API.

C. Transfer the database backup files to an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance.

D. Back up the database directly to an AWS Snowball device and use lifecycle rules to move the data to Amazon S3 Glacier Deep Archive.

A

Option B (backing up to an AWS Storage Gateway volume gateway) is also a valid approach but may introduce additional complexity in managing storage volumes compared to a file gateway.

Option C (transferring backup files to an Amazon EBS volume) is not as scalable for long-term backups, and it may not provide the same level of durability as Amazon S3.

Option D (using AWS Snowball) is more suitable for offline data transfer and initial large-scale data migration, but it may not be as seamless for ongoing backup and recovery operations as the file gateway solution.

13
Q

A company decides to migrate its three-tier web application from on-premises to the AWS Cloud. The new database must be capable of dynamically scaling storage capacity and performing table joins.
Which AWS service meets these requirements?

A. Amazon Aurora

B. Amazon RDS for SQL Server

C. Amazon DynamoDB Streams

D. Amazon DynamoDB on-demand

A

Amazon RDS for SQL Server (option B) is also a managed relational database service, but it may not provide the same level of performance and scalability as Amazon Aurora. Additionally, the question does not specify the need for SQL Server compatibility.

Amazon DynamoDB Streams (option C) and Amazon DynamoDB on-demand (option D) are NoSQL database services and do not support table joins in the same way as a relational database system. They are not the best fit for this specific requirement.

14
Q

A company mandates that an Amazon S3 gateway endpoint must allow traffic to trusted buckets only. Which method should a solutions architect implement to meet this requirement?

A. Create a bucket policy for each of the company’s trusted S3 buckets that allows traffic only from the company’s trusted VPCs.

B. Create a bucket policy for each of the company’s trusted S3 buckets that allows traffic only from the company’s S3 gateway endpoint IDs.

C. Create an S3 endpoint policy for each of the company’s S3 gateway endpoints that blocks access from any VPC other than the company’s trusted VPCs.

D. Create an S3 endpoint policy for each of the company’s S3 gateway endpoints that provides access to the Amazon Resource Name (ARN) of the trusted S3 buckets.

A

Options A, B, and C do not provide the same level of granularity and control over access to specific buckets as using an S3 endpoint policy.

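A boto3 sketch of Option D: an endpoint policy on the S3 gateway endpoint that only allows access to the trusted bucket ARNs. The endpoint ID and bucket names are placeholders.

    import json
    import boto3

    ec2 = boto3.client("ec2")

    # Endpoint policy that permits only the trusted buckets.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::trusted-bucket-1",
                    "arn:aws:s3:::trusted-bucket-1/*",
                    "arn:aws:s3:::trusted-bucket-2",
                    "arn:aws:s3:::trusted-bucket-2/*",
                ],
            }
        ],
    }

    ec2.modify_vpc_endpoint(
        VpcEndpointId="vpce-0123456789abcdef0",  # placeholder S3 gateway endpoint ID
        PolicyDocument=json.dumps(policy),
    )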
15
Q

A company is using a VPC peering strategy to connect its VPCs in a single Region to allow for cross-communication. A recent increase in account creations and VPCs has made it difficult to maintain the VPC peering strategy, and the company expects to grow to hundreds of VPCs. There are also new requests to create site-to-site VPNs with some of the VPCs. A solutions architect has been tasked with creating a centrally managed networking setup for multiple accounts, VPCs, and VPNs. Which networking solution meets these requirements?

A. Configure shared VPCs and VPNs and share to each other.

B. Configure a hub-and-spoke VPC and route all traffic through VPC peering.

C. Configure an AWS Direct Connect connection between all VPCs and VPNs.

D. Configure a transit gateway with AWS Transit Gateway and connect all VPCs and VPNs.

A

Option A
Shared VPCs: While shared VPCs can simplify certain aspects of network management, they can become complex and may not be suitable for connecting VPCs across multiple AWS accounts.
VPN Integration: This option does not directly address the requirement for site-to-site VPNs, which are needed for connecting VPCs to on-premises networks.

Option B
Hub-and-Spoke VPC: This approach can work for some scenarios but doesn’t provide the same level of centralized management as AWS Transit Gateway.
VPC Peering: Managing VPC peering connections can become challenging at scale and doesn’t address the requirement for site-to-site VPNs.

Option C
AWS Direct Connect: While Direct Connect can provide dedicated network connections to AWS, it doesn’t directly address the requirements for VPC-to-VPC connectivity or centralized management of VPCs and VPNs. It’s typically used for connecting on-premises data centers to AWS.

16
Q

A solutions architect is helping a developer design a new ecommerce shopping cart application using AWS services. The developer is unsure of the current database schema and expects to make changes as the ecommerce site grows. The solution needs to be highly resilient and capable of automatically scaling read and write capacity. Which database solution meets these requirements?

A. Amazon Aurora PostgreSQL

B. Amazon DynamoDB with on-demand enabled

C. Amazon DynamoDB with DynamoDB Streams enabled

D. Amazon SQS and Amazon Aurora PostgreSQL

A

Amazon Aurora PostgreSQL (option A) is a solid relational database option, but it may involve more complexity and manual management when it comes to scaling and schema changes compared to DynamoDB with on-demand capacity.

While DynamoDB Streams (option C) can be useful for real-time change tracking, it’s not directly related to the requirement for automatically scaling capacity.

Using Amazon SQS (option D) for a shopping cart application is primarily for message queuing and decoupling components, and it’s not a direct replacement for a database. It can complement a database solution but wouldn’t be the primary choice for storing shopping cart data.

17
Q

A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file share hosted in the user's on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers to Amazon EC2 instances in multiple Availability Zones that are connected to the storage solution, and configuring an Elastic Load Balancer attached to the instances.
Which replacement to the on-premises file share is MOST resilient and durable?

A. Migrate the file Share to Amazon RDS.

B. Migrate the file Share to AWS Storage Gateway

C. Migrate the file Share to Amazon FSx for Windows File Server.

D. Migrate the file share to Amazon Elastic File System (Amazon EFS)

A

While other options like AWS Storage Gateway (option B), Amazon Elastic File System (Amazon EFS) (option D), and Amazon RDS (option A) have their use cases, they may not provide the same level of durability, resilience, and seamless integration with Windows workloads as Amazon FSx does in this scenario.

18
Q

A company needs to implement a relational database with a multi-Region disaster recovery Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of 1 minute. Which AWS solution can achieve this?

A. Amazon Aurora Global Database

B. Amazon DynamoDB global tables

C. Amazon RDS for MySQL with Multi-AZ enabled

D. Amazon RDS for MySQL with a cross-Region snapshot copy

A

Option B (Amazon DynamoDB global tables) replicates data across Regions with very low RPO, but DynamoDB is a NoSQL service and does not meet the requirement for a relational database.

Option C (Amazon RDS for MySQL with Multi-AZ enabled) offers high availability within a single Region but does not provide multi-Region disaster recovery capabilities.

Option D (Amazon RDS for MySQL with a cross-Region snapshot copy) allows for cross-Region backups, but snapshot-based recovery cannot achieve an RPO of 1 second or an RTO of 1 minute. Amazon Aurora Global Database replicates to secondary Regions with typical lag under 1 second and allows a secondary Region to be promoted in under a minute, which meets both objectives for a relational workload.

19
Q

A company runs a web service on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across two Availability Zones. The company needs a minimum of four instances at all times to meet the required service level agreement (SLA) while keeping costs low.
If an Availability Zone fails, how can the company remain compliant with the SLA?

A. Add a target tracking scaling policy with a short cooldown period.

B. Change the Auto Scaling group launch configuration to use a larger instance type.

C. Change the Auto Scaling group to use six servers across three Availability Zones.

D. Change the Auto Scaling group to use eight servers across two Availability Zones.

A

Option A (Add a target tracking scaling policy with a short cooldown period) may help with automatic scaling but does not address the requirement for maintaining a minimum of four instances during an Availability Zone failure.

Option B (Change the Auto Scaling group launch configuration to use a larger instance type) focuses on instance type rather than Availability Zone redundancy.

Option D (Change the Auto Scaling group to use eight servers across two Availability Zones) would keep four instances running if one Availability Zone failed, but running eight instances all the time costs more than running six instances across three Availability Zones, so it is not the low-cost way to meet the SLA.

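A boto3 sketch of Option C, assuming a launch template, ALB target group, and three subnets already exist; the names, ARN, and subnet IDs are placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Six instances spread over three AZs: if one AZ fails, four remain and the SLA still holds.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-service-asg",
        MinSize=6,
        MaxSize=6,
        DesiredCapacity=6,
        LaunchTemplate={"LaunchTemplateName": "web-service", "Version": "$Latest"},
        VPCZoneIdentifier="subnet-az1,subnet-az2,subnet-az3",
        TargetGroupARNs=[
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
        ],
    )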
20
Q

A company is reviewing its AWS Cloud deployment to ensure its data is not accessed by anyone without appropriate authorization. A solutions architect is tasked with identifying all open Amazon S3 buckets and recording any S3 bucket configuration changes.
What should the solutions architect do to accomplish this?

A. Enable AWS Config service with the appropriate rules

B. Enable AWS Trusted Advisor with the appropriate checks.

C. Write a script using an AWS SDK to generate a bucket report

D. Enable Amazon S3 server access logging and configure Amazon CloudWatch Events.

A

Option B (Enable AWS Trusted Advisor) focuses on cost and security best practices but does not provide detailed visibility into S3 bucket configurations.

Option C (Write a script using an AWS SDK) would require manual effort and ongoing maintenance to monitor changes and identify open S3 buckets.

Option D (Enable Amazon S3 server access logging and configure Amazon CloudWatch Events) is primarily focused on access logging and monitoring, but it doesn’t provide a comprehensive solution for identifying open buckets and tracking configuration changes. AWS Config is a better choice for this purpose.

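A boto3 sketch of Option A using two AWS managed Config rules that flag publicly readable or writable buckets. It assumes a configuration recorder and delivery channel are already enabled, which is what records the S3 bucket configuration changes.

    import boto3

    config = boto3.client("config")

    # AWS managed rules that flag S3 buckets allowing public read or write access.
    for rule_id in ("S3_BUCKET_PUBLIC_READ_PROHIBITED", "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"):
        config.put_config_rule(
            ConfigRule={
                "ConfigRuleName": rule_id.lower().replace("_", "-"),
                "Source": {"Owner": "AWS", "SourceIdentifier": rule_id},
            }
        )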
21
Q

A company is planning to build a new web application on AWS. The company expects predictable traffic most of the year and very high traffic on occasion. The web application needs to be highly available and fault tolerant with minimal latency. What should a solutions architect recommend to meet these requirements?

A. Use an Amazon Route 53 routing policy to distribute requests to two AWS Regions, each with one Amazon EC2 instance.

B. Use Amazon EC2 instances in an Auto Scaling group with an Application Load Balancer across multiple Availability Zones.

C. Use Amazon EC2 instances in a cluster placement group with an Application Load Balancer across multiple Availability Zones.

D. Use Amazon EC2 instances in a cluster placement group and include the cluster placement group within a new Auto Scaling group.

A

Option A (using Route 53 routing policy with two AWS Regions) might introduce higher latency due to cross-Region communication and is typically used for disaster recovery or global load balancing rather than within a single Region.

Option C (using a cluster placement group with ALB) is suitable for applications with specific low-latency requirements, but it doesn’t provide automatic scaling or the fault tolerance benefits of an Auto Scaling group.

Option D (using a cluster placement group within an Auto Scaling group) is not a standard configuration and may not provide the desired benefits for high availability and fault tolerance in this context. It’s better to use an Auto Scaling group directly with an ALB.

22
Q

A company is designing a web application using AWS that processes insurance quotes. Users will request quotes from the application. Quotes must be separated by quote type, must be responded to within 24 hours, and must not be lost. The solution should be simple to set up and maintain. Which solution meets these requirements?

A. Create multiple Amazon Kinesis data streams based on the quote type. Configure the web application to send messages to the proper data stream. Configure each backend group of application servers to pool messages from its own data stream using the Kinesis Client Library (KCL).

B. Create multiple Amazon Simple Notification Service (Amazon SNS) topics and register Amazon SQS queues to their own SNS topic based on the quote type. Configure the web application to publish messages to the SNS topic queue. Configure each backend application server to work its own SQS queue.

C. Create a single Amazon Simple Notification Service (Amazon SNS) topic and subscribe the Amazon SQS queues to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to work its own SQS queue.

D. Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to deliver data streams to an Amazon Elasticsearch Service (Amazon ES) cluster. Configure the web application to send messages to the proper delivery stream. Configure each backend group of application servers to search for the messages from Amazon ES and process them accordingly.

A

Option A (using Amazon Kinesis data streams) adds complexity to the solution and is typically used for real-time data streaming scenarios, not for separating and processing quotes.

Option B (using multiple SNS topics and SQS queues) adds unnecessary complexity and may not be as efficient as a single SNS topic with message filtering.

Option D (using Kinesis Data Firehose and Amazon ES) is designed for data ingestion and analytics use cases and is not suitable for this specific requirement of handling insurance quotes.

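A boto3 sketch of Option C: one SNS topic, one SQS queue per quote type, and a filter policy on each subscription. The topic, queue names, and quote types are placeholders, and the SQS queue policies that allow SNS to deliver messages are omitted for brevity.

    import json
    import boto3

    sns = boto3.client("sns")
    sqs = boto3.client("sqs")

    topic_arn = sns.create_topic(Name="insurance-quotes")["TopicArn"]

    # One queue per quote type; each subscription filters on a 'quote_type' attribute.
    for quote_type in ("auto", "home", "life"):
        queue_url = sqs.create_queue(QueueName=f"{quote_type}-quotes")["QueueUrl"]
        queue_arn = sqs.get_queue_attributes(
            QueueUrl=queue_url, AttributeNames=["QueueArn"]
        )["Attributes"]["QueueArn"]
        sns.subscribe(
            TopicArn=topic_arn,
            Protocol="sqs",
            Endpoint=queue_arn,
            Attributes={"FilterPolicy": json.dumps({"quote_type": [quote_type]})},
        )

    # The web application publishes once; SNS routes the message to the matching queue.
    sns.publish(
        TopicArn=topic_arn,
        Message=json.dumps({"customer": "example", "details": "..."}),
        MessageAttributes={"quote_type": {"DataType": "String", "StringValue": "auto"}},
    )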
23
Q

A solutions architect has configured the following IAM policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": [
        "lambda:CreateFunction",
        "lambda:DeleteFunction"
      ],
      "Resource": "*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "220.100.16.0/20"
        }
      }
    }
  ]
}

Which action will be allowed by the policy?

A. An AWS Lambda function can be deleted from any network.

B. An AWS Lambda function can be created from any network.

C. An AWS Lambda function can be deleted from the 100.220.0.0/20 network.

D. An AWS Lambda function can be deleted from the 220.100.16.0/20 network.

A

The policy allows actions on AWS Lambda functions with the “Effect”: “Allow” statement. However, it denies the “lambda:CreateFunction” and “lambda:DeleteFunction” actions when the source IP address is from the range “220.100.16.0/20”.

So, the correct answer is:

C. An AWS Lambda function can be deleted from the 100.220.0.0/20 network.

24
Q

A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files. Which storage option meets these requirements?

A. S3 Standard

B. S3 Intelligent-Tiering

C. S3 Standard-Infrequent Access (S3 Standard-IA)

D. S3 One Zone-Infrequent Access (S3 One Zone-IA)

A

A. S3 Standard: While S3 Standard provides high durability and availability, it is more expensive than other storage classes, and it does not automatically optimize costs based on access patterns.

C. S3 Standard-Infrequent Access (S3 Standard-IA): This storage class is designed for objects that are infrequently accessed but still require the high availability and durability of the S3 Standard class. It may not be the most cost-effective option for rarely accessed files, and it does not provide the automatic tiering features of S3 Intelligent-Tiering.

D. S3 One Zone-Infrequent Access (S3 One Zone-IA): This option stores data in a single Availability Zone, which means it does not provide the desired resilience to the loss of an Availability Zone. It is not recommended for data that needs to be highly available and resilient.

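A short boto3 sketch of Option B: uploading objects directly into the S3 Intelligent-Tiering storage class, after which S3 moves each object between access tiers based on its own access pattern. The bucket name, key, and file name are placeholders.

    import boto3

    s3 = boto3.client("s3")

    with open("clip-0001.mp4", "rb") as media_file:
        s3.put_object(
            Bucket="media-assets-example",      # placeholder bucket name
            Key="videos/clip-0001.mp4",
            Body=media_file,
            StorageClass="INTELLIGENT_TIERING",
        )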
25
Q

A company is running a three-tier web application to process credit card payments. The front-end user interface consists of static webpages. The application tier can have long-running processes. The database tier uses MySQL. The application is currently running on a single, general purpose large Amazon EC2 instance. A solutions architect needs to decouple the services to make the web application highly available. Which solution would provide the HIGHEST availability?

A. Move static assets to Amazon CloudFront. Leave the application in EC2 in an Auto Scaling group. Move the database to Amazon RDS to deploy Multi-AZ.

B. Move static assets and the application into a medium EC2 instance. Leave the database on the large instance. Place both instances in an Auto Scaling group.

C. Move static assets to Amazon S3. Move the application to AWS Lambda with the concurrency limit set. Move the database to Amazon DynamoDB with on- demand enabled.

D. Move static assets to Amazon S3. Move the application to Amazon Elastic Container Service (Amazon ECS) containers with Auto Scaling enabled. Move the database to Amazon RDS to deploy Multi-AZ.

A

A. Move static assets to Amazon CloudFront. Leave the application in EC2 in an Auto Scaling group. Move the database to Amazon RDS to deploy Multi-AZ: This option provides some level of high availability for the database tier using Amazon RDS Multi-AZ, but it doesn’t address the high availability of the application tier. Leaving the application in EC2 in an Auto Scaling group is a good practice, but it may not provide the same level of availability as other options that leverage more scalable and serverless services for the application.

B. Move static assets and the application into a medium EC2 instance. Leave the database on the large instance. Place both instances in an Auto Scaling group: This option does involve using Auto Scaling, but it relies on EC2 instances for both the application and static assets. While Auto Scaling helps with redundancy and scaling, it may not provide the same level of availability and scalability as more cloud-native services like Amazon ECS and Amazon S3. Additionally, it doesn’t address the database tier’s high availability as effectively as Multi-AZ RDS.

C. Move static assets to Amazon S3. Move the application to AWS Lambda with the concurrency limit set. Move the database to Amazon DynamoDB with on-demand enabled: While using Amazon S3 for static assets and DynamoDB for the database can be highly available solutions, this option suggests using AWS Lambda for the application. AWS Lambda is serverless and can handle concurrency, but it may not be suitable for long-running processes, which the application tier may have. Limiting concurrency could also impact performance during spikes in traffic, reducing availability.

26
Q

A media company stores video content in an Amazon Elastic Block Store (Amazon EBS) volume. A certain video file has become popular and a large number of users across the world are accessing this content. This has resulted in a cost increase. Which action will DECREASE cost without compromising user accessibility?

A. Change the EBS volume to Provisioned IOPS (PIOPS).

B. Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution.

C. Split the video into multiple, smaller segments so users are routed to the requested video segments only.

D. Create an Amazon S3 bucket in each Region and upload the videos so users are routed to the nearest S3 bucket.

A

Options A and C are not directly related to reducing costs for serving popular video content. Changing the EBS volume to Provisioned IOPS (PIOPS) (Option A) is more applicable to block storage performance and may not be cost-effective for serving media content. Splitting the video into smaller segments (Option C) can complicate the application logic and may not significantly reduce costs.

Option D mentions creating an Amazon S3 bucket in each Region and uploading the videos to minimize latency. While this approach may reduce latency for users by serving content from the nearest Region, it doesn’t inherently decrease costs, and it increases the complexity of managing multiple S3 buckets and data synchronization between them. Using Amazon CloudFront is a more straightforward and cost-effective solution for global content delivery.

27
Q

A solutions architect is designing the cloud architecture for a new application being deployed to AWS. The application allows users to interactively download and upload files. Files older than 2 years will be accessed less frequently. The solutions architect needs to ensure that the application can scale to any number of files while maintaining high availability and durability. Which scalable solutions should the solutions architect recommend? (Choose two.)

A. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Glacier.

B. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard-Infrequent Access (S3 Standard-IA).

C. Store the files on Amazon Elastic File System (Amazon EFS) with a lifecycle policy that moves objects older than 2 years to EFS Infrequent Access (EFS IA).

D. Store the files in Amazon Elastic Block Store (Amazon EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data older than 2 years.

E. Store the files in RAID-striped Amazon Elastic Block Store (Amazon EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data older than 2 years.

A

C. Storing files on Amazon Elastic File System (Amazon EFS) with a lifecycle policy that moves objects older than 2 years to EFS IA is not the best choice because EFS is a file storage service primarily intended for network file systems, and it does not provide the same scalability and cost-effectiveness as Amazon S3 for object storage.

D. Storing files in Amazon Elastic Block Store (Amazon EBS) volumes and scheduling snapshots to archive data older than 2 years is not suitable for this use case because EBS volumes are block storage designed for attaching to EC2 instances and are not intended for scalable file storage.

E. Storing files in RAID-striped Amazon EBS volumes and scheduling snapshots for archiving data older than 2 years is similar to Option D and is not the appropriate solution for scalable, object-level file storage with lifecycle policies.

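A boto3 sketch of the lifecycle rule behind Options A and B: objects transition after roughly two years (730 days), to S3 Standard-IA here, or to Glacier by swapping the storage class. The bucket name and rule ID are placeholders.

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="user-files-example",  # placeholder bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "standard-ia-after-2-years",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    # Use "GLACIER" instead of "STANDARD_IA" for the Option A variant.
                    "Transitions": [{"Days": 730, "StorageClass": "STANDARD_IA"}],
                }
            ]
        },
    )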
28
Q

A company has recently updated its internal security standards. The company must now ensure all Amazon S3 buckets and Amazon Elastic Block Store (Amazon EBS) volumes are encrypted with keys created and periodically rotated by internal security specialists. The company is looking for a native, software-based AWS service to accomplish this goal.
What should a solutions architect recommend as a solution?

A. Use AWS Secrets Manager with customer master keys (CMKs) to store master key material and apply a routine to create a new CMK periodically and replace it in AWS Secrets Manager.

B. Use AWS Key Management Service (AWS KMS) with customer master keys (CMKs) to store master key material and apply a routine to re-create a new key periodically and replace it in AWS KMS.

C. Use an AWS CloudHSM cluster with customer master keys (CMKs) to store master key material and apply a routine to re-create a new key periodically and replace it in the CloudHSM cluster nodes.

D. Use AWS Systems Manager Parameter Store with customer master keys (CMKs) to store master key material and apply a routine to re-create a new key periodically and replace it in the Parameter Store.

A

A. AWS Secrets Manager is primarily used for storing and rotating application secrets and credentials, not for managing encryption keys. While it can store some types of sensitive data, it’s not designed for managing encryption keys used for S3 and EBS encryption.

C. AWS CloudHSM is a hardware-based key management service, and it’s not a native, software-based AWS service. It involves the setup and management of physical hardware, which may be more complex and expensive than using AWS KMS.

D. AWS Systems Manager Parameter Store is also used for storing configuration data and parameters, not for managing encryption keys. It’s not designed for the key rotation and encryption requirements described in the scenario.

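A boto3 sketch of Option B: the security team creates a customer managed KMS key, enables rotation, and sets it as the default encryption key on a bucket. The alias, description, and bucket name are placeholders; EBS volumes would reference the same key at creation time.

    import boto3

    kms = boto3.client("kms")
    s3 = boto3.client("s3")

    # Customer managed key; automatic rotation can be enabled, or the security team
    # can rotate manually by creating a new key and re-pointing the alias.
    key_id = kms.create_key(Description="S3/EBS data-at-rest key")["KeyMetadata"]["KeyId"]
    kms.create_alias(AliasName="alias/data-at-rest", TargetKeyId=key_id)
    kms.enable_key_rotation(KeyId=key_id)

    # Default bucket encryption with the customer managed key.
    s3.put_bucket_encryption(
        Bucket="example-bucket",  # placeholder bucket name
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": key_id,
                    }
                }
            ]
        },
    )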
29
Q

A company’s dynamic website is hosted using on-premises servers in the United States. The company is launching its product in Europe, and it wants to optimize site loading times for new European users. The site’s backend must remain in the United States. The product is being launched in a few days, and an immediate solution is needed.
What should the solutions architect recommend?

A. Launch an Amazon EC2 instance in us-east-1 and migrate the site to it.

B. Move the website to Amazon S3. Use cross-Region replication between Regions.

C. Use Amazon CloudFront with a custom origin pointing to the on-premises servers.

D. Use an Amazon Route 53 geo-proximity routing policy pointing to on-premises servers.

A

Option A (launching an EC2 instance in us-east-1 and migrating the site) is not an optimal solution because it would still result in higher latency for European users accessing the site from the United States.

Option B (moving the website to Amazon S3 with cross-Region replication) would require replicating the entire website content to an AWS region in Europe, which may not be feasible in a short time frame.

Option D (using Amazon Route 53 geo-proximity routing policy) would direct traffic to the on-premises servers based on geographic proximity, but it doesn’t leverage a CDN like CloudFront to optimize content delivery and reduce latency.

30
Q

A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images. Which method is the MOST cost-effective for hosting the website?

A. Containerize the website and host it in AWS Fargate.

B. Create an Amazon S3 bucket and host the website there.

C. Deploy a web server on an Amazon EC2 instance to host the website.

D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.

A

A. Containerizing the website and hosting it in AWS Fargate (a serverless container service) may be overkill for a simple static website and can incur additional costs associated with container management.

C. Deploying a web server on an Amazon EC2 instance adds the cost of running and maintaining the EC2 instance, which is not necessary for a static website with no server-side processing.

D. Configuring an Application Load Balancer with an AWS Lambda target using the Express.js framework introduces unnecessary complexity and additional costs associated with running Lambda functions, which are typically used for serverless event-driven workloads, not for serving static websites.

31
Q

A company is hosting multiple websites for several lines of business under its registered parent domain. Users accessing these websites will be routed to appropriate backend Amazon EC2 instances based on the subdomain. The websites host static webpages, images, and server-side scripts like PHP and JavaScript. Some of the websites experience peak access during the first two hours of business with constant usage throughout the rest of the day. A solutions architect needs to design a solution that will automatically adjust capacity to these traffic patterns while keeping costs low. Which combination of AWS services or features will meet these requirements? (Choose two.)

A. AWS Batch

B. Network Load Balancer

C. Application Load Balancer

D. Amazon EC2 Auto Scaling

E. Amazon S3 website hosting

A

A. AWS Batch is a service for batch computing workloads, and it is not suitable for hosting websites or handling web traffic.

B. Network Load Balancer (NLB) is a layer 4 load balancer that operates at the transport layer, and it doesn’t have the capabilities for host-based routing like ALB. NLB is typically used for distributing TCP and UDP traffic at the network level.

Option E (Amazon S3 website hosting) is suitable for hosting static webpages and images, but it doesn’t provide the dynamic server-side scripting capabilities required for PHP and JavaScript scripts, nor does it offer the traffic routing capabilities based on subdomains. ALB and EC2 Auto Scaling are better suited for this use case.

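A boto3 sketch of the ALB piece of the answer (Option C): a host-header rule that routes one subdomain to its own target group, which the matching Auto Scaling group (Option D) registers its instances into. The listener and target group ARNs are placeholders.

    import boto3

    elbv2 = boto3.client("elbv2")

    # Route sales.example.com to the target group backing that line of business.
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/web/abc123/def456",
        Priority=10,
        Conditions=[
            {"Field": "host-header", "HostHeaderConfig": {"Values": ["sales.example.com"]}}
        ],
        Actions=[
            {
                "Type": "forward",
                "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/sales/abc123",
            }
        ],
    )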
32
Q

A company uses an Amazon S3 bucket to store static images for its website. The company configured permissions to allow access to Amazon S3 objects by privileged users only. What should a solutions architect do to protect against data loss? (Choose two.)

A. Enable versioning on the S3 bucket.

B. Enable access logging on the S3 bucket.

C. Enable server-side encryption on the S3 bucket.

D. Configure an S3 lifecycle rule to transition objects to Amazon S3 Glacier.

E. Use MFA Delete to require multi-factor authentication to delete an object.

A

B. Enabling access logging helps with monitoring and auditing access to S3 objects but does not directly protect against data loss.

C. Enabling server-side encryption provides data-at-rest protection but does not prevent data loss in terms of accidental or unauthorized deletions.

D. Configuring an S3 lifecycle rule to transition objects to Amazon S3 Glacier is a cost optimization measure and data archiving strategy but does not directly protect against data loss due to deletions. It helps reduce storage costs for infrequently accessed data.

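A boto3 sketch of Options A and E. Enabling versioning is an ordinary API call; changing MFA Delete additionally requires the bucket owner's (root user's) MFA device serial and a current code, shown here with placeholder values.

    import boto3

    s3 = boto3.client("s3")
    bucket = "static-images-example"  # placeholder bucket name

    # Versioning keeps prior object versions, so overwrites and deletes are recoverable.
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

    # MFA Delete must be set with the root user's MFA serial and current token code.
    s3.put_bucket_versioning(
        Bucket=bucket,
        MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",  # placeholder
        VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    )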
33
Q

An operations team has a standard that states IAM policies should not be applied directly to users. Some new team members have not been following this standard. The operations manager needs a way to easily identify the users with attached policies. What should a solutions architect do to accomplish this?

A. Monitor using AWS CloudTrail.

B. Create an AWS Config rule to run daily.

C. Publish IAM user changes to Amazon SNS.

D. Run AWS Lambda when a user is modified.

A

Option A (Monitor using AWS CloudTrail) is more focused on tracking user activity and API calls but may not provide an out-of-the-box solution to identify users with attached policies.

Option C (Publish IAM user changes to Amazon SNS) and Option D (Run AWS Lambda when a user is modified) involve custom event-driven solutions and would require more manual setup and coding compared to using AWS Config Rules, which is specifically designed for policy compliance and standards enforcement.

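A short boto3 sketch of Option B using the AWS managed Config rule that flags IAM users with inline or directly attached policies.

    import boto3

    config = boto3.client("config")

    # Flags any IAM user that has policies attached directly instead of via groups/roles.
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "iam-user-no-policies-check",
            "Source": {"Owner": "AWS", "SourceIdentifier": "IAM_USER_NO_POLICIES_CHECK"},
        }
    )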
34
Q

A company wants to use an AWS Region as a disaster recovery location for its on-premises infrastructure. The company has 10 TB of existing data, and the on-premises data center has a 1 Gbps internet connection. A solutions architect must find a solution so the company can have its existing data on AWS in 72 hours without transmitting it using an unencrypted channel. Which solution should the solutions architect select?

A. Send the initial 10 TB of data to AWS using FTP.

B. Send the initial 10 TB of data to AWS using AWS Snowball.

C. Establish a VPN connection between Amazon VPC and the company’s data center.

D. Establish an AWS Direct Connect connection between Amazon VPC and the company’s data center.

A

Option A (Sending data via FTP) is not recommended because it may not provide the required security and speed, especially for transferring a large amount of data over a 1 Gbps internet connection.

Options C (Establishing a VPN connection) and D (Establishing an AWS Direct Connect connection) are typically used for network connectivity and data replication but may not be suitable for transferring a large initial dataset within the specified 72-hour timeframe, and they may not address the encryption requirement directly. These options are better suited for ongoing data replication and access rather than the initial data transfer.

35
Q

A company is building applications in containers. The company wants to migrate its on-premises development and operations services from its on-premises data center to AWS. Management states that production systems must be cloud agnostic and use the same configuration and administrator tools across production systems. A solutions architect needs to design a managed solution that will align with open-source software. Which solution meets these requirements?

A. Launch the containers on Amazon EC2 with EC2 instance worker nodes.

B. Launch the containers on Amazon Elastic Kubernetes Service (Amazon EKS) and EKS worker nodes.

C. Launch the containers on Amazon Elastic Container Service (Amazon ECS) with AWS Fargate instances.

D. Launch the containers on Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 instance worker nodes.

A

Option A (Launch containers on Amazon EC2 with EC2 instance worker nodes) is not as suitable for ensuring cloud-agnostic production systems and does not provide the same level of container orchestration and management as Amazon EKS.

Option C (Launch containers on Amazon ECS with AWS Fargate instances) is a viable option, but it’s more closely tied to AWS and may not provide the same level of cloud-agnosticism as Amazon EKS. It also uses a different container orchestration platform (ECS) compared to Kubernetes, which may require a different skill set.

Option D (Launch containers on Amazon ECS with Amazon EC2 instance worker nodes) is similar to option C but requires managing EC2 instances for worker nodes, which may add complexity and operational overhead compared to the managed worker nodes in Amazon EKS.

36
Q

A company hosts its website on AWS. To address the highly variable demand, the company has implemented Amazon EC2 Auto Scaling. Management is concerned that the company is over-provisioning its infrastructure, especially at the front end of the three-tier application. A solutions architect needs to ensure costs are optimized without impacting performance. What should the solutions architect do to accomplish this?

A. Use Auto Scaling with Reserved Instances.

B. Use Auto Scaling with a scheduled scaling policy.

C. Use Auto Scaling with the suspend-resume feature.

D. Use Auto Scaling with a target tracking scaling policy.

A

Option A (Use Auto Scaling with Reserved Instances) can provide cost savings but is more suitable for steady-state workloads with predictable traffic. It may not be as efficient for highly variable demand scenarios.

Option B (Use Auto Scaling with a scheduled scaling policy) is useful for scenarios where you have predictable, recurring traffic patterns. It may not be suitable for handling unpredictable spikes in demand.

Option C (Use Auto Scaling with the suspend-resume feature) is typically used for scenarios where you want to stop and start instances on a predefined schedule, which may not align with dynamic and variable demand patterns.

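A boto3 sketch of Option D: a target tracking policy that keeps average CPU near a chosen target across the front-end Auto Scaling group. The group name, policy name, and 50% target are placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Capacity follows actual load instead of a fixed, over-provisioned size.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-frontend-asg",  # placeholder group name
        PolicyName="cpu-target-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        },
    )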
37
Q

A solutions architect is performing a security review of a recently migrated workload. The workload is a web application that consists of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The solutions architect must improve the security posture and minimize the impact of a DDoS attack on resources. Which solution is MOST effective?

A. Configure an AWS WAF ACL with rate-based rules. Create an Amazon CloudFront distribution that points to the Application Load Balancer. Enable the WAF ACL on the CloudFront distribution.

B. Create a custom AWS Lambda function that adds identified attacks into a common vulnerability pool to capture a potential DDoS attack. Use the identified information to modify a network ACL to block access.

C. Enable VPC Flow Logs and store them in Amazon S3. Create a custom AWS Lambda function that parses the logs looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.

D. Enable Amazon GuardDuty and configure findings written to Amazon CloudWatch. Create an event with CloudWatch Events for DDoS alerts that triggers Amazon Simple Notification Service (Amazon SNS). Have Amazon SNS invoke a custom AWS Lambda function that parses the logs, looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.

A

Option B (Creating a custom Lambda function) and option C (Enabling VPC Flow Logs and creating custom Lambda functions) involve more manual and reactive approaches to DDoS mitigation compared to AWS WAF and CloudFront.

Option D (Enabling Amazon GuardDuty) is a valuable service for threat detection but does not provide the same level of real-time protection and DDoS mitigation capabilities as AWS WAF and CloudFront combined.

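A boto3 sketch of the WAF piece of Option A: a CloudFront-scoped web ACL with a rate-based rule that blocks IPs exceeding a request threshold. The names, the 2,000-request limit, and metric names are placeholders; the ACL is then referenced from the CloudFront distribution that fronts the ALB.

    import boto3

    # CLOUDFRONT-scoped web ACLs must be created in us-east-1.
    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    wafv2.create_web_acl(
        Name="web-ddos-protection",
        Scope="CLOUDFRONT",
        DefaultAction={"Allow": {}},
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "webAclMetric",
        },
        Rules=[
            {
                "Name": "rate-limit-per-ip",
                "Priority": 0,
                # Block any single IP that exceeds 2,000 requests in a 5-minute window.
                "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
                "Action": {"Block": {}},
                "VisibilityConfig": {
                    "SampledRequestsEnabled": True,
                    "CloudWatchMetricsEnabled": True,
                    "MetricName": "rateLimitMetric",
                },
            }
        ],
    )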
38
Q

A company has multiple AWS accounts for various departments. One of the departments wants to share an Amazon S3 bucket with all other departments. Which solution will require the LEAST amount of effort?

A. Enable cross-account S3 replication for the bucket.

B. Create a pre-signed URL for the bucket and share it with other departments.

C. Set the S3 bucket policy to allow cross-account access to other departments.

D. Create IAM users for each of the departments and configure a read-only IAM policy.

A

Option A (Enabling cross-account S3 replication) involves a different use case and purpose. It’s used for replicating objects from one bucket to another in a different AWS account, and it may involve more configuration and complexity than necessary for simple read access.

Option B (Creating pre-signed URLs) requires generating pre-signed URLs for each object or resource that needs to be shared. While it’s a valid approach for temporary access, it may not be the most efficient way to share an entire bucket, especially when dealing with multiple departments.

Option D (Creating IAM users for each department) would involve significant effort, as you would need to create and manage IAM users for each department, configure IAM policies, and distribute credentials. This approach is more suitable for fine-grained access control within a single AWS account, but it’s not efficient for sharing a bucket across multiple AWS accounts.

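A boto3 sketch of Option C: a bucket policy granting read access to the other departments' AWS accounts. The account IDs and bucket name are placeholders.

    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "shared-department-bucket"  # placeholder bucket name

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": [
                        "arn:aws:iam::222222222222:root",
                        "arn:aws:iam::333333333333:root",
                    ]
                },
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }

    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))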
39
Q

A company needs to share an Amazon S3 bucket with an external vendor. The bucket owner must be able to access all objects.
Which action should be taken to share the S3 bucket?

A. Update the bucket to be a Requester Pays bucket.

B. Update the bucket to enable cross-origin resource sharing (CORS).

C. Create a bucket policy to require users to grant bucket-owner-full-control when uploading objects.

D. Create an IAM policy to require users to grant bucket-owner-full-control when uploading objects.

A

Option D (Creating an IAM policy) would apply to users within your AWS account and may not be suitable for external vendors. Option A (Requester Pays) is unrelated to access control. Option B (CORS) is used for controlling access to objects from web pages in different domains and does not enforce permissions like “bucket-owner-full-control”.

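A boto3 sketch of Option C: a bucket policy that denies uploads unless the vendor grants the bucket owner full control via the bucket-owner-full-control canned ACL. The bucket name is a placeholder.

    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "vendor-shared-bucket"  # placeholder bucket name

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "RequireBucketOwnerFullControl",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {
                    "StringNotEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
                },
            }
        ],
    }

    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))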
40
Q

A company is developing a real-time multiplayer game that uses UDP for communications between clients and servers in an Auto Scaling group. Spikes in demand are anticipated during the day, so the game server platform must adapt accordingly. Developers want to store gamer scores and other non-relational data in a database solution that will scale without intervention. Which solution should a solutions architect recommend?

A. Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage.

B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data storage.

C. Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for data storage.

D. Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global tables for data storage.

A

Option A pairs Amazon Route 53 with Amazon Aurora Serverless. Route 53 is a DNS service; it can direct clients toward servers but does not load balance UDP game traffic the way a Network Load Balancer does. Aurora Serverless is also a relational database, while the scenario calls for storing non-relational gamer scores and data.

Option C suggests using Amazon Aurora Global Database, which is a good choice for relational databases but may not be as cost-effective or suitable for non-relational data storage as DynamoDB.

Option D mentions Amazon DynamoDB global tables, which are indeed suitable for scaling DynamoDB across multiple regions, but it doesn’t mention on-demand capacity provisioning, which is important for handling variable workloads efficiently.
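
To make the "scale without intervention" part of option B concrete, here is a hedged sketch that creates a DynamoDB table in on-demand (PAY_PER_REQUEST) mode; the table and attribute names are made up for illustration:

import boto3

dynamodb = boto3.client("dynamodb")

# On-demand billing mode means no capacity planning; DynamoDB scales with the request rate.
dynamodb.create_table(
    TableName="GamerScores",
    AttributeDefinitions=[{"AttributeName": "PlayerId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "PlayerId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

On the traffic side, the Network Load Balancer listener would use the UDP protocol, which an Application Load Balancer does not support.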

41
Q

A company collects temperature, humidity, and atmospheric pressure data in cities across multiple continents. The average volume of data collected per site each day is 500 GB. Each site has a high-speed internet connection. The company’s weather forecasting applications are based in a single Region and analyze the data daily. What is the FASTEST way to aggregate data from all of these global sites?

A. Enable Amazon S3 Transfer Acceleration on the destination bucket. Use multipart uploads to directly upload site data to the destination bucket.

B. Upload site data to an Amazon S3 bucket in the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket.

C. Schedule AWS Snowball jobs daily to transfer data to the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket.

D. Upload the data to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Once a day take an EBS snapshot and copy it to the centralized Region. Restore the EBS volume in the centralized Region and run an analysis on the data daily.

A

B. Uploading data to the closest AWS Region and then using S3 cross-Region replication adds additional time for replication between Regions, and the speed of replication may vary.

C. Scheduling AWS Snowball jobs daily involves physical data transfer using Snowball devices, which may not be as fast as a cloud-native approach and could take longer to complete.

D. Uploading data to an Amazon EC2 instance in the closest Region, taking EBS snapshots, copying them to a centralized Region, and then restoring EBS volumes introduces multiple steps and data movement, potentially leading to longer data transfer times.
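
A hedged sketch of option A using boto3; the bucket and file names are placeholders, and the multipart thresholds would be tuned to each site's link:

import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

BUCKET = "weather-central-data"  # hypothetical destination bucket in the analysis Region

# One-time setup: enable Transfer Acceleration on the destination bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Each site uploads through the accelerated endpoint; large files become multipart uploads.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
transfer_config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,
)
s3.upload_file("site_readings.tar.gz", BUCKET, "site-a/site_readings.tar.gz", Config=transfer_config)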

42
Q

A company has a custom application running on an Amazon EC2 instance that:
* Reads a large amount of data from Amazon S3
* Performs a multi-stage analysis
* Writes the results to Amazon DynamoDB
The application writes a significant number of large, temporary files during the multi-stage analysis. The process performance depends on the temporary storage performance.
What would be the fastest storage option for holding the temporary files?

A. Multiple Amazon S3 buckets with Transfer Acceleration for storage.

B. Multiple Amazon Elastic Block Store (Amazon EBS) drives with Provisioned IOPS and EBS optimization.

C. Multiple Amazon Elastic File System (Amazon EFS) volumes using the Network File System version 4.1 (NFSv4.1) protocol.

D. Multiple instance store volumes with software RAID 0.

A

Option A (Multiple Amazon S3 buckets with Transfer Acceleration) involves using S3 for storage, which is object storage optimized for durability and availability but may not provide the same level of performance as instance store volumes for temporary files.

Option B (Multiple Amazon EBS drives with Provisioned IOPS and EBS optimization) is designed for persistent storage and may offer good performance for certain workloads, but it may not be as high-performance as instance store volumes for temporary files.

Option C (Multiple Amazon EFS volumes using NFSv4.1) is a network-attached file system and may not provide the same level of raw performance as instance store volumes, especially for temporary files that require high-speed access.
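
As a rough sketch of option D (assuming an instance type with two NVMe instance store volumes; device names vary by instance type), the volumes could be striped into a single scratch filesystem like this:

import subprocess

# Hypothetical instance store device names; confirm with `lsblk` on the actual instance.
DEVICES = ["/dev/nvme1n1", "/dev/nvme2n1"]

# RAID 0 stripes writes across both volumes, maximizing throughput for temporary files.
subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--level=0",
     f"--raid-devices={len(DEVICES)}", *DEVICES],
    check=True,
)
subprocess.run(["mkfs.ext4", "/dev/md0"], check=True)
subprocess.run(["mount", "/dev/md0", "/mnt/scratch"], check=True)

Because instance store data is lost when the instance stops, this setup is only appropriate for the temporary files described in the question.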

43
Q

A leasing company generates and emails PDF statements every month for all its customers. Each statement is about 400 KB in size. Customers can download their statements from the website for up to 30 days from when the statements were generated. At the end of their 3-year lease, the customers are emailed a ZIP file that contains all the statements. What is the MOST cost-effective storage solution for this situation?

A. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 1 day.

B. Store the statements using the Amazon S3 Glacier storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier Deep Archive storage after 30 days.

C. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) storage after 30 days.

D. Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days.

A

Option A (Amazon S3 Standard with a transition to S3 Glacier) transitions the data to Amazon S3 Glacier, which is optimized for long-term archival but may not be as cost-effective as Glacier Deep Archive.

Option C (Amazon S3 Standard with a transition to S3 One Zone-IA) transitions the data to S3 One Zone-IA, which offers lower costs but does not provide the same level of durability as S3 Glacier Deep Archive.

Option D (Amazon S3 Standard-Infrequent Access with a transition to S3 Glacier) transitions the data to S3 Glacier, which may not be as cost-effective as Glacier Deep Archive for long-term archival purposes.
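
A minimal sketch of the lifecycle rule the implied answer relies on, assuming boto3 and a hypothetical bucket and prefix; statements transition to Glacier Deep Archive once the 30-day download window ends:

import boto3

BUCKET = "customer-statements"  # hypothetical bucket name

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "deep-archive-after-30-days",
            "Status": "Enabled",
            "Filter": {"Prefix": "statements/"},
            # After the 30-day download window, statements move to the cheapest archive tier.
            "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)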

44
Q

A company recently released a new type of internet-connected sensor. The company is expecting to sell thousands of sensors, which are designed to stream high volumes of data each second to a central location. A solutions architect must design a solution that ingests and stores data so that engineering teams can analyze it in near-real time with millisecond responsiveness. Which solution should the solutions architect recommend?

A. Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift.

B. Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB.

C. Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift.

D. Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB.

A

Option A (Amazon SQS queue with data stored in Amazon Redshift) introduces additional latency as Redshift is not designed for real-time data ingestion and may not provide the same responsiveness as DynamoDB.

Option B (Amazon SQS queue with data stored in Amazon DynamoDB) stores the data in the same place as option D, but SQS-based ingestion with Lambda polling typically adds more latency than Kinesis Data Streams for high-volume, real-time streaming data.

Option C (Amazon Kinesis Data Streams with data stored in Amazon Redshift) can work, but using DynamoDB for storing real-time sensor data is generally a more suitable choice for millisecond responsiveness.
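
A hedged sketch of the consumer side of option D: a Lambda function triggered by the Kinesis stream that writes each decoded record to DynamoDB. The table name and payload fields are assumptions, not part of the original question:

import base64
import json
import boto3

# Hypothetical on-demand DynamoDB table holding the latest sensor readings.
table = boto3.resource("dynamodb").Table("SensorReadings")

def handler(event, context):
    # Kinesis delivers records in batches; each payload is base64-encoded.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(Item={
            "SensorId": payload["sensor_id"],
            "Timestamp": str(payload["timestamp"]),
            "Reading": json.dumps(payload["reading"]),
        })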

45
Q

A website runs a web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the custom application consistently takes 1 minute to initiate upon boot up before responding to user requests. How should a solutions architect redesign the architecture to better respond to changing traffic?

A. Configure a Network Load Balancer with a slow start configuration.

B. Configure AWS ElastiCache for Redis to offload direct requests to the servers.

C. Configure an Auto Scaling step scaling policy with an instance warmup condition.

D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.

A

Option A (Configure a Network Load Balancer with a slow start configuration) may help with traffic distribution but does not address the instance warmup requirement, which is essential for ensuring responsiveness during boot up.

Option B (Configure AWS ElastiCache for Redis to offload direct requests to the servers) is more suitable for caching data and offloading read-heavy workloads, but it may not directly address the boot time issue or handle the traffic bursts.

Option D (Configure Amazon CloudFront to use an Application Load Balancer as the origin) primarily focuses on content delivery and distribution and may not directly address the boot time issue or provide a solution for handling traffic bursts.
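
A minimal sketch of option C with boto3; the group name, thresholds, and step sizes are placeholders, and the 60-second warmup mirrors the application's 1-minute boot time so newly launched instances are not counted toward the scaling metrics until they can actually serve requests:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",          # hypothetical group name
    PolicyName="noon-burst-step-scaling",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    EstimatedInstanceWarmup=60,                  # matches the 1-minute application boot time
    StepAdjustments=[
        # Small breaches of the alarm threshold add one instance; larger breaches add three.
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 20, "ScalingAdjustment": 1},
        {"MetricIntervalLowerBound": 20, "ScalingAdjustment": 3},
    ],
)

The policy is then attached to a CloudWatch alarm (for example, on request count or CPU) that fires as the noon burst begins.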

46
Q

A company is concerned that two NAT instances in use will no longer be able to support the traffic needed for the company’s application. A solutions architect wants to implement a solution that is highly available, fault tolerant, and automatically scalable.
What should the solutions architect recommend?

A. Remove the two NAT instances and replace them with two NAT gateways in the same Availability Zone.

B. Use Auto Scaling groups with Network Load Balancers for the NAT instances in different Availability Zones.

C. Remove the two NAT instances and replace them with two NAT gateways in different Availability Zones.

D. Replace the two NAT instances with Spot Instances in different Availability Zones and deploy a Network Load Balancer.

A

Option A (Replacing with two NAT gateways in the same Availability Zone) does not provide the necessary fault tolerance across Availability Zones, which is a key requirement for high availability.

Option B (Using Auto Scaling groups with Network Load Balancers for NAT instances in different Availability Zones) introduces complexity and may not provide the same level of reliability as NAT gateways.

Option D (Replacing with Spot Instances in different Availability Zones and deploying a Network Load Balancer) is less suitable for NAT instances, as Spot Instances can be terminated with little notice and may not offer the same level of stability as NAT gateways.
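
A rough sketch of option C with boto3, assuming two public subnets (one per Availability Zone) and one private route table per zone; all IDs below are placeholders:

import boto3

ec2 = boto3.client("ec2")

# One NAT gateway per Availability Zone, each used by that zone's private subnets.
AZ_LAYOUT = [
    {"public_subnet": "subnet-aaaa1111", "private_route_table": "rtb-aaaa1111"},
    {"public_subnet": "subnet-bbbb2222", "private_route_table": "rtb-bbbb2222"},
]

for zone in AZ_LAYOUT:
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(
        SubnetId=zone["public_subnet"],
        AllocationId=eip["AllocationId"],
    )
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])
    # Send the zone's outbound internet traffic through its local NAT gateway.
    ec2.create_route(
        RouteTableId=zone["private_route_table"],
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )

NAT gateways are managed and scale automatically, so there is no Auto Scaling group or load balancer to maintain.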

47
Q

A company operates a website on Amazon EC2 Linux instances. Some of the instances are failing. Troubleshooting points to insufficient swap space on the failed instances. The operations team lead needs a solution to monitor this. What should a solutions architect recommend?

A. Configure an Amazon CloudWatch SwapUsage metric dimension. Monitor the SwapUsage dimension in the EC2 metrics in CloudWatch.

B. Use EC2 metadata to collect information, then publish it to Amazon CloudWatch custom metrics. Monitor SwapUsage metrics in CloudWatch.

C. Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor SwapUtilization metrics in CloudWatch.

D. Enable detailed monitoring in the EC2 console. Create an Amazon CloudWatch SwapUtilization custom metric. Monitor SwapUtilization metrics in CloudWatch.

A

Option A (Configuring a SwapUsage metric dimension) does not work because EC2 does not publish swap or memory metrics to CloudWatch by default, so there is no built-in SwapUsage metric or dimension to monitor.

Option B (Using EC2 metadata to collect information and publish custom metrics) is feasible but requires additional scripting and customization compared to using the CloudWatch agent, which simplifies the process.

Option D (Enabling detailed monitoring and creating a custom SwapUtilization metric) involves enabling detailed monitoring at an additional cost and may not be as straightforward as using the CloudWatch agent for this specific purpose.
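
A hedged sketch of the agent configuration option C relies on (assuming the CloudWatch agent is already installed on the Linux instances; the namespace and file path follow the agent's defaults):

import json

# Minimal CloudWatch agent configuration that publishes swap metrics every 60 seconds.
agent_config = {
    "metrics": {
        "namespace": "CWAgent",
        "metrics_collected": {
            "swap": {
                "measurement": ["swap_used_percent", "swap_used", "swap_free"],
                "metrics_collection_interval": 60,
            }
        },
    }
}

# Default configuration location for the agent on Linux.
with open("/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json", "w") as f:
    json.dump(agent_config, f, indent=2)

After the agent is restarted with this configuration, the swap metrics appear in CloudWatch and can drive an alarm for the operations team.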

48
Q

A company has a web server running on an Amazon EC2 instance in a public subnet with an Elastic IP address. The default security group is assigned to the EC2 instance. The default network ACL has been modified to block all traffic. A solutions architect needs to make the web server accessible from everywhere on port 443. Which combination of steps will accomplish this task? (Choose two.)

A. Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0.

B. Create a security group with a rule to allow TCP port 443 to destination 0.0.0.0/0.

C. Update the network ACL to allow TCP port 443 from source 0.0.0.0/0.

D. Update the network ACL to allow inbound/outbound TCP port 443 from source 0.0.0.0/0 and to destination 0.0.0.0/0.

E. Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and outbound TCP port 32768-65535 to destination 0.0.0.0/0.

A

To make the web server accessible from everywhere on port 443, you can follow these steps:

A. Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0. This allows incoming traffic on port 443 to your EC2 instance.

E. Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and outbound TCP port 32768-65535 to destination 0.0.0.0/0. This ensures that traffic on port 443 can flow both to and from your EC2 instance.

These steps will allow incoming traffic on port 443 and make the web server accessible from anywhere.
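
A rough sketch of both chosen steps with boto3; the security group and network ACL IDs are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Step A: security groups are stateful, so one inbound rule for port 443 is enough.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Step E: network ACLs are stateless, so inbound 443 and the outbound ephemeral ports
# (used for the return traffic) must both be allowed explicitly.
NACL_ID = "acl-0123456789abcdef0"
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=100, Protocol="6", RuleAction="allow",
    Egress=False, CidrBlock="0.0.0.0/0", PortRange={"From": 443, "To": 443},
)
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=110, Protocol="6", RuleAction="allow",
    Egress=True, CidrBlock="0.0.0.0/0", PortRange={"From": 32768, "To": 65535},
)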

49
Q

A company must re-evaluate its need for the Amazon EC2 instances it currently has provisioned in an Auto Scaling group. At present, the Auto Scaling group is configured for a minimum of two instances and a maximum of four instances across two Availability Zones. A Solutions architect reviewed Amazon CloudWatch metrics and found that CPU utilization is consistently low for all the EC2 instances. What should the solutions architect recommend to maximize utilization while ensuring the application remains fault tolerant?

A. Remove some EC2 instances to increase the utilization of remaining instances.

B. Increase the Amazon Elastic Block Store (Amazon EBS) capacity of instances with less CPU utilization.

C. Modify the Auto Scaling group scaling policy to scale in and out based on a higher CPU utilization metric.

D. Create a new launch configuration that uses smaller instance types. Update the existing Auto Scaling group.

A

A. Removing some EC2 instances would raise the utilization of the remaining instances, but it is a one-time manual change that reduces capacity and can leave the application without an instance in each Availability Zone, weakening fault tolerance.

B. Increasing the Amazon EBS capacity of instances won’t significantly impact CPU utilization if the existing instances are not CPU-bound.

C. Modifying the Auto Scaling group scaling policy based on a higher CPU utilization metric may lead to unnecessary scaling actions if CPU utilization fluctuates.
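
A minimal sketch of option D with boto3; the AMI ID, instance type, and names are assumptions used only to show the shape of the change:

import boto3

autoscaling = boto3.client("autoscaling")

# New launch configuration with a smaller instance type to raise per-instance utilization.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="app-lc-small",
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.small",
)

# Point the existing group at the new launch configuration; keeping a minimum of two
# instances across two Availability Zones preserves fault tolerance.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-asg",
    LaunchConfigurationName="app-lc-small",
    MinSize=2,
    MaxSize=4,
)

Existing instances keep running on the old size; they can be replaced gradually (for example, via an instance refresh) so capacity is never lost during the change.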

50
Q

A company has an application that posts messages to Amazon SQS. Another application polls the queue and processes the messages in an I/O-intensive operation. The company has a service level agreement (SLA) that specifies the maximum amount of time that can elapse between receiving the messages and responding to the users. Due to an increase in the number of messages, the company has difficulty meeting its SLA consistently. What should a solutions architect do to help improve the application’s processing time and ensure it can handle the load at any level?

A. Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with a larger size.

B. Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with an Amazon EC2 Dedicated Instance.

C. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy to keep its aggregate CPU utilization below 70%.

D. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy based on the age of the oldest message in the SQS queue.

A

A. Creating an AMI from the existing instance and replacing it with a larger size may provide more resources, but it doesn’t address the efficiency and scalability issues in a dynamic way.

B. Creating an AMI from the instance and replacing it with an Amazon EC2 Dedicated Instance doesn’t necessarily improve efficiency or scalability for I/O-intensive operations.

C. Scaling the Auto Scaling group on aggregate CPU utilization is a poor fit for an I/O-intensive workload: CPU utilization can stay low while the SQS backlog, and therefore the response time, keeps growing, so this policy does not directly address the root cause.
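
A hedged sketch of option D with boto3; the queue name, group name, and target value are placeholders, with the target derived from the SLA:

import boto3

autoscaling = boto3.client("autoscaling")

# Scale the worker fleet so the oldest SQS message never waits longer than the SLA allows.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="sqs-worker-asg",
    PolicyName="track-oldest-message-age",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateAgeOfOldestMessage",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "orders-queue"}],
            "Statistic": "Maximum",
        },
        "TargetValue": 300.0,  # e.g. 5 minutes, chosen from the SLA
    },
)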