Q051-100 Flashcards

1
Q

A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon Elastic Block Store
(Amazon EBS) volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?

A. Copy the data so both EBS volumes contain all the documents.

B. Configure the Application Load Balancer to direct a user to the server with the documents.

C. Copy the data from both EBS volumes to Amazon Elastic File System (Amazon EFS). Modify the application to save new documents to Amazon Elastic File System (Amazon EFS).

D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.

A

A. Copying the data so that both EBS volumes contain all the documents would lead to data duplication and synchronization challenges. It doesn’t address the issue of load balancing and ensuring users can see all their documents seamlessly.

B. Configuring the Application Load Balancer to direct a user to a specific server doesn’t solve the problem, as it would still result in users seeing only a subset of their documents depending on which server they are directed to.

Option D, while feasible, is more complex to implement and maintain. You would need to ensure that both servers are kept in sync with the latest document changes, which can introduce synchronization challenges.

2
Q

A company is planning to use Amazon S3 to store images uploaded by its users. The images must be encrypted at rest in Amazon S3. The company does not want to spend time managing and rotating the keys, but it does want to control who can access those keys.
What should a solutions architect use to accomplish this?

A. Server-Side Encryption with keys stored in an S3 bucket

B. Server-Side Encryption with Customer-Provided Keys (SSE-C)

C. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)

A

A. Server-Side Encryption with keys stored in an S3 bucket would require you to manage and rotate the keys yourself, which is not what you want to do. Additionally, it may not provide the same level of control over access to the keys as AWS KMS provides.

B. Server-Side Encryption with Customer-Provided Keys (SSE-C) allows you to provide your own encryption keys, but you are responsible for managing and rotating these keys, which is something you want to avoid.

C. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) allows Amazon S3 to manage the encryption keys, but it doesn’t provide you with control over access to the keys, which is a requirement in this case.
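
For reference, a minimal boto3 sketch of an SSE-KMS upload, the correct option D (the bucket name and key alias are placeholders):

import boto3

s3 = boto3.client("s3")

# S3 encrypts the object with a customer managed KMS key; who may use
# that key is governed separately by the key's KMS key policy.
s3.put_object(
    Bucket="example-user-images",
    Key="uploads/photo.jpg",
    Body=b"...image bytes...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/user-images-key",
)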

3
Q

A company is running an ecommerce application on Amazon EC2. The application consists of a stateless web tier that requires a minimum of 10 instances, and a peak of 250 instances to support the application’s usage. The application requires 50 instances 80% of the time.
Which solution should be used to minimize costs?

A. Purchase Reserved Instances to cover 250 instances.

B. Purchase Reserved Instances to cover 80 instances. Use Spot Instances to cover the remaining instances.

C. Purchase On-Demand Instances to cover 40 instances. Use Spot Instances to cover the remaining instances.

D. Purchase Reserved Instances to cover 50 instances. Use On-Demand and Spot Instances to cover the remaining instances.

A

A. Purchasing Reserved Instances to cover the maximum peak of 250 instances would be costly and may lead to underutilized Reserved Instances during non-peak periods, resulting in unnecessary expenses.

B. Purchasing Reserved Instances to cover 80 instances over-commits relative to the 50-instance baseline, so 30 of the reservations would sit idle most of the time, and relying on Spot Instances alone for the remaining peak capacity risks interruptions for the web tier.

C. Purchasing On-Demand Instances to cover 40 instances and using Spot Instances for the remaining instances might not provide enough capacity during peak times when you need 250 instances.

4
Q

A company has deployed an API in a VPC behind an internet-facing Application Load Balancer (ALB). An application that consumes the API as a client is deployed in a second account in private subnets behind a NAT gateway. When requests to the client application increase, the NAT gateway costs are higher than expected. A solutions architect has configured the ALB to be internal.
Which combination of architectural changes will reduce the NAT gateway costs? (Choose two.)

A. Configure a VPC peering connection between the two VPCs. Access the API using the private address.

B. Configure an AWS Direct Connect connection between the two VPCs. Access the API using the private address.

C. Configure a ClassicLink connection for the API into the client VPC. Access the API using the ClassicLink address.

D. Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address.

E. Configure an AWS Resource Access Manager connection between the two accounts. Access the API using the private address.

A

Option B (Configure an AWS Direct Connect connection between the two VPCs) is not typically used to reduce NAT gateway costs and can be a more complex and expensive solution compared to VPC peering or PrivateLink in this context.

Option C (Configure a ClassicLink connection for the API into the client VPC) is used for linking EC2-Classic instances to a Virtual Private Cloud (VPC) and is not relevant to this scenario.

Option E (Configure an AWS Resource Access Manager connection between the two accounts) doesn’t directly address the NAT gateway cost issue and is not a typical solution for reducing NAT gateway costs.
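
As a sketch of option A, one side requests the peering connection and each VPC then routes the other's CIDR through it (all account, VPC, CIDR, and route table IDs below are placeholders). Option D's PrivateLink alternative instead exposes the API as an endpoint service, which requires fronting it with a Network Load Balancer.

import boto3

ec2 = boto3.client("ec2")

# Request peering from the client VPC to the API VPC; the API
# account must accept the request before traffic can flow.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-client0001",
    PeerVpcId="vpc-api0002",
    PeerOwnerId="222222222222",
)

# Route the API VPC's CIDR through the peering connection so the client
# reaches the (now internal) ALB privately, bypassing the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-client0001",
    DestinationCidrBlock="10.1.0.0/16",
    VpcPeeringConnectionId=peering["VpcPeeringConnection"]["VpcPeeringConnectionId"],
)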

5
Q

A solutions architect is tasked with transferring 750 TB of data from an on-premises network-attached file system located at a branch office to Amazon S3 Glacier.
The migration must not saturate the on-premises 1 Mbps internet connection.
Which solution will meet these requirements?

A. Create an AWS Site-to-Site VPN tunnel to an Amazon S3 bucket and transfer the files directly by using the AWS CLI.

B. Order 10 AWS Snowball Edge Storage Optimized devices, and select an S3 Glacier vault as the destination.

C. Mount the network-attached file system to an S3 bucket, and copy the files directly. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.

D. Order 10 AWS Snowball Edge Storage Optimized devices, and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.

A

A. Creating an AWS site-to-site VPN tunnel and transferring the files directly to Amazon S3 may not be feasible given the limited 1 Mbps internet connection. It could saturate the connection and take a significant amount of time to complete the transfer.

B. AWS Snowball Edge Storage Optimized devices are the right tool for moving this volume of data offline without saturating the internet connection, but a Snowball Edge import job delivers data into an Amazon S3 bucket; an S3 Glacier vault cannot be selected as the destination. Importing into S3 and then transitioning the objects to S3 Glacier with a lifecycle policy, as option D does, is the workable approach.

C. While mounting the network-attached file system to an S3 bucket and copying the files directly to S3 is a possible approach, it might not be suitable for this scenario if it leads to saturating the 1 Mbps internet connection for an extended period.

6
Q

A company has a two-tier application architecture that runs in public and private subnets. Amazon EC2 instances running the web application are in the public subnet and a database runs on the private subnet. The web application instances and the database are running in a single Availability Zone (AZ).
Which combination of steps should a solutions architect take to provide high availability for this architecture? (Choose two.)

A. Create new public and private subnets in the same AZ for high availability.

B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs.

C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer.

D. Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ.

E. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment.

A

A. Creating new public and private subnets in the same AZ for high availability does not provide fault tolerance because both subnets are still within the same AZ. High availability typically involves distributing resources across different AZs.

C. Adding the existing web application instances to an Auto Scaling group behind an Application Load Balancer without spanning multiple AZs does not fully address high availability. You need to ensure that instances are spread across multiple AZs to achieve better fault tolerance.

D. Creating new public and private subnets in a new AZ and creating a database using Amazon EC2 in one AZ does not provide high availability for the database. It’s essential to utilize Amazon RDS with multi-AZ deployment or another highly available database configuration to ensure database availability across different AZs.
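
A minimal boto3 sketch of option B's web tier, spreading instances across two Availability Zones behind a load balancer (the launch template, subnet IDs, and target group ARN are placeholders):

import boto3

autoscaling = boto3.client("autoscaling")

# One subnet per AZ in VPCZoneIdentifier makes the group multi-AZ;
# the target group ties the instances to the Application Load Balancer.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    VPCZoneIdentifier="subnet-az1a,subnet-az1b",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123"
    ],
)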

7
Q

A solutions architect is implementing a document review application using an Amazon S3 bucket for storage. The solution must prevent an accidental deletion of the documents and ensure that all versions of the documents are available. Users must be able to download, modify, and upload documents.
Which combination of actions should be taken to meet these requirements? (Choose two.)

A. Enable a read-only bucket ACL.

B. Enable versioning on the bucket.

C. Attach an IAM policy to the bucket.

D. Enable MFA Delete on the bucket.

E. Encrypt the bucket using AWS KMS.

A

A. Enabling a read-only bucket ACL restricts all write actions to the bucket, which would not allow users to modify or upload documents. This is not suitable for the stated requirements.

C. IAM policies attach to IAM identities (users, groups, and roles), not to buckets; a bucket takes a resource-based bucket policy instead. In either case, a policy alone does not retain prior versions of a document.

E. Encrypting the bucket using AWS KMS is a security measure to protect data at rest. While it’s a good practice for security, it is not directly related to preventing accidental deletions or ensuring document versions are available.

Versioning (B) keeps every version of each document available, and MFA Delete (D) requires multi-factor authentication before a version can be permanently deleted, protecting against accidental deletion.
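
A minimal boto3 sketch of options B and D (bucket name, MFA serial, and code are placeholders; MFA Delete can only be enabled by the root user):

import boto3

s3 = boto3.client("s3")

# B: keep every version of every document.
s3.put_bucket_versioning(
    Bucket="example-docs",
    VersioningConfiguration={"Status": "Enabled"},
)

# D: additionally require an MFA code to permanently delete a version.
s3.put_bucket_versioning(
    Bucket="example-docs",
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)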

8
Q

An application hosted on AWS is experiencing performance problems, and the application vendor wants to perform an analysis of the log file to troubleshoot further. The log file is stored on Amazon S3 and is 10 GB in size. The application owner will make the log file available to the vendor for a limited time.
What is the MOST secure way to do this?

A. Enable public read on the S3 object and provide the link to the vendor.

B. Upload the file to Amazon WorkDocs and share the public link with the vendor.

C. Generate a presigned URL and have the vendor download the log file before it expires.

D. Create an IAM user for the vendor to provide access to the S3 bucket and the application. Enforce multi-factor authentication.

A

A. Enabling public read on the S3 object and providing the link to the vendor is not secure as it makes the log file accessible to anyone with the link, which can lead to unauthorized access.

B. Uploading the file to Amazon WorkDocs and sharing the public link with the vendor might be a convenient option, but it may not provide the same level of control and security as a presigned URL. Also, WorkDocs may not be the most suitable service for this specific use case.

D. Creating an IAM user for the vendor with access to the S3 bucket is generally secure, but enforcing multi-factor authentication (MFA) could be complex and might not be practical for the vendor. Additionally, it’s often preferable to avoid creating IAM users for external parties when temporary access is needed because managing such accounts can be challenging.
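
A minimal boto3 sketch of option C (bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")

# The URL grants time-limited GET access to this one object and is signed
# with the caller's credentials; no bucket policy or IAM user is needed.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-logs", "Key": "app/troubleshooting.log"},
    ExpiresIn=3600,  # seconds; pick a window long enough for a 10 GB download
)
print(url)  # share this link with the vendor; it stops working at expiry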

9
Q

A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company.
How should security groups be configured in this situation? (Choose two.)

A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.

B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0.

C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.

D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier.

E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier.

A

B. Configuring the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0 is not typically necessary for the web tier unless your application specifically requires outbound HTTPS communication to any destination, which is less common.

D. Configuring the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier is not a typical configuration and may not be needed for the described architecture. In most cases, outbound traffic from the database tier to the web tier is not necessary for basic web application functionality and should be restricted based on specific use cases.

E. Configuring the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier may introduce unnecessary and potentially insecure access to the database tier over port 443. It’s recommended to allow only necessary and specific ports for communication.
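
A minimal boto3 sketch of options A and C (security group IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# A: web tier accepts HTTPS from anywhere.
ec2.authorize_security_group_ingress(
    GroupId="sg-webtier",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# C: database tier accepts SQL Server traffic (1433) only from
# instances that belong to the web tier's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-dbtier",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
        "UserIdGroupPairs": [{"GroupId": "sg-webtier"}],
    }],
)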

10
Q

A company allows its developers to attach existing IAM policies to existing IAM roles to enable faster experimentation and agility. However, the security operations team is concerned that the developers could attach the existing administrator policy, which would allow the developers to circumvent any other security policies.
How should a solutions architect address this issue?

A. Create an Amazon SNS topic to send an alert every time a developer creates a new policy.

B. Use service control policies to disable IAM activity across all accounts in the organizational unit.

C. Prevent the developers from attaching any policies and assign all IAM duties to the security operations team.

D. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy.

A

A. Creating an Amazon SNS topic to send an alert every time a developer creates a new policy can be helpful for monitoring and auditing, but it doesn’t prevent the action from happening in the first place.

B. Using service control policies (SCPs) to disable IAM activity across all accounts in the organizational unit would be an overly restrictive measure and would likely hinder legitimate IAM activities.

Option C, preventing developers from attaching any policies and assigning all IAM duties to the security operations team, is not practical for agility and experimentation, and it would create a bottleneck for IAM tasks.
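
A sketch of option D under one possible design: a managed policy that allows normal work but explicitly denies attaching the AdministratorAccess policy, applied as the developers' permissions boundary (the policy and role names are placeholders):

import boto3, json

iam = boto3.client("iam")

boundary = iam.create_policy(
    PolicyName="developer-boundary",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "*", "Resource": "*"},
            {
                "Effect": "Deny",
                "Action": [
                    "iam:AttachRolePolicy",
                    "iam:AttachUserPolicy",
                    "iam:AttachGroupPolicy",
                ],
                "Resource": "*",
                "Condition": {"ArnEquals": {
                    "iam:PolicyARN":
                        "arn:aws:iam::aws:policy/AdministratorAccess"}},
            },
        ],
    }),
)

# The explicit Deny in the boundary wins over any Allow the
# developers attach to themselves.
iam.put_role_permissions_boundary(
    RoleName="developer-role",
    PermissionsBoundary=boundary["Policy"]["Arn"],
)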

11
Q

A company has a multi-tier application that runs six front-end web servers in an Amazon EC2 Auto Scaling group in a single Availability Zone behind an
Application Load Balancer (ALB). A solutions architect needs to modify the infrastructure to be highly available without modifying the application.
Which architecture should the solutions architect choose that provides high availability?

A. Create an Auto Scaling group that uses three instances across each of two Regions.

B. Modify the Auto Scaling group to use three instances across each of two Availability Zones.

C. Create an Auto Scaling template that can be used to quickly create more instances in another Region.

D. Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance traffic to the web tier.

A

Option A (creating an Auto Scaling group across two Regions) is not possible: a single Auto Scaling group cannot span Regions. Even a two-Region design built another way would introduce latency and complexity and could require modifying the application to handle cross-Region traffic.

Option C (creating an Auto Scaling template for another Region) is not necessary for addressing high availability within a single Region, and it doesn’t provide the same level of redundancy as using multiple Availability Zones.

Option D (changing the ALB to use round-robin) would distribute traffic evenly but wouldn’t address the high availability requirement effectively. Using multiple Availability Zones is a more robust solution for ensuring availability.

12
Q

A company runs an application on a group of Amazon Linux EC2 instances. For compliance reasons, the company must retain all application log files for 7 years.
The log files will be analyzed by a reporting tool that must access all files concurrently.
Which storage solution meets these requirements MOST cost-effectively?

A. Amazon Elastic Block Store (Amazon EBS)

B. Amazon Elastic File System (Amazon EFS)

C. Amazon EC2 instance store

D. Amazon S3

A

Option B (Amazon EFS) might be considered, but it is typically more expensive for long-term storage compared to Amazon S3, and it may not provide the same durability and cost benefits.

Options A (Amazon EBS) and C (Amazon EC2 instance store) are not suitable for long-term storage of log files because they do not offer the durability, scalability, and cost-effectiveness of Amazon S3.

13
Q

A media streaming company collects real-time data and stores it in a disk-optimized database system. The company is not getting the expected throughput and wants an in-memory database storage solution that performs faster and provides high availability using data replication.
Which database should a solutions architect recommend?

A. Amazon RDS for MySQL

B. Amazon RDS for PostgreSQL.

C. Amazon ElastiCache for Redis

D. Amazon ElastiCache for Memcached

A

Options A (Amazon RDS for MySQL) and B (Amazon RDS for PostgreSQL) are relational database services and may not offer the same in-memory performance benefits as Redis for certain workloads. They are disk-optimized databases and not specifically designed for high-speed in-memory operations.

Option D (Amazon ElastiCache for Memcached) is another in-memory caching service, but it lacks some advanced features like data persistence and complex data structures offered by Redis.

14
Q

A company hosts its product information webpages on AWS. The existing solution uses multiple Amazon EC2 instances behind an Application Load Balancer in an
Auto Scaling group. The website also uses a custom DNS name and communicates with HTTPS only using a dedicated SSL certificate. The company is planning a new product launch and wants to be sure that users from around the world have the best possible experience on the new website.
What should a solutions architect do to meet these requirements?

A. Redesign the application to use Amazon CloudFront.

B. Redesign the application to use AWS Elastic Beanstalk.

C. Redesign the application to use a Network Load Balancer.

D. Redesign the application to use Amazon S3 static website hosting.

A

Option B (AWS Elastic Beanstalk) is a Platform as a Service (PaaS) offering that simplifies the deployment and management of web applications. While it can be used for web hosting, it doesn’t provide the global content delivery capabilities of CloudFront.

Option C (Network Load Balancer) is used for distributing incoming network traffic across targets within a VPC. It does not provide the content caching and global distribution capabilities needed to improve user experience worldwide.

Option D (Amazon S3 static website hosting) is suitable for hosting static content like HTML, CSS, and images but may not be the best choice for dynamic web applications that require backend processing.

15
Q

A solutions architect is designing the cloud architecture for a new application being deployed on AWS. The process should run in parallel while adding and removing application nodes as needed based on the number of jobs to be processed. The processor application is stateless. The solutions architect must ensure that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?

A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage.

B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage.

C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.

D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.

A

Option A (using SNS topics) is not ideal for job processing, as SNS is designed for pub/sub messaging and doesn’t provide the durability and job processing capabilities of SQS.

Option B (scaling based on network usage) is not typically used for job processing scaling, and network usage may not directly correlate with the number of jobs to be processed.

Option D (using SNS topics) is similar to option A and does not provide the durability and job processing capabilities needed for this use case.
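
A simplified boto3 sketch of option C's scaling policy. AWS's recommended metric for this pattern is actually backlog per instance (a custom metric), so target-tracking the raw queue depth, as below, is a deliberate simplification; the group and queue names are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

# Add capacity when the average number of visible messages rises
# above the target, and remove it when the backlog drains.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="job-processors",
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,
    },
)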

16
Q

A marketing company is storing CSV files in an Amazon S3 bucket for statistical analysis. An application on an Amazon EC2 instance needs permission to efficiently process the CSV data stored in the S3 bucket.
Which action will MOST securely grant the EC2 instance access to the S3 bucket?

A. Attach a resource-based policy to the S3 bucket.

B. Create an IAM user for the application with specific permissions to the S3 bucket.

C. Associate an IAM role with least privilege permissions to the EC2 instance profile.

D. Store AWS credentials directly on the EC2 instance for applications on the instance to use for API calls.

A

A. Attaching a resource-based policy to the S3 bucket can grant broad access to anyone or any resource with the correct credentials. It may not provide the fine-grained control and least privilege principles that IAM roles offer.

B. Creating an IAM user for the application would require the storage of AWS access keys on the EC2 instance, which can pose a security risk if those credentials are compromised. IAM roles with temporary security credentials are a more secure option.

D. Storing AWS credentials directly on the EC2 instance can be risky, as it increases the likelihood of unauthorized access to those credentials if the instance is compromised. IAM roles provide a more secure and manageable way to grant access to AWS resources for EC2 instances.
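
A minimal boto3 sketch of option C (the profile name, instance ID, and bucket are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Attach a least-privilege role, wrapped in an instance profile, to the
# instance; no long-lived keys are ever stored on the instance.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "csv-reader-profile"},
    InstanceId="i-0123456789abcdef0",
)

# Code running on the instance then picks up the role's temporary
# credentials automatically:
s3 = boto3.client("s3")
s3.get_object(Bucket="example-csv-bucket", Key="data/input.csv")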

17
Q

A company has on-premises servers that run a relational database. The database serves high-read traffic for users in different locations. The company wants to migrate the database to AWS with the least amount of effort. The database solution must support high availability and must not affect the company’s current traffic flow.
Which solution meets these requirements?

A. Use a database in Amazon RDS with Multi-AZ and at least one read replica.

B. Use a database in Amazon RDS with Multi-AZ and at least one standby replica.

C. Use databases that are hosted on multiple Amazon EC2 instances in different AWS Regions.

D. Use databases that are hosted on Amazon EC2 instances behind an Application Load Balancer in different Availability Zones.

A

Option B (using Amazon RDS with Multi-AZ and at least one standby replica) provides high availability, but the Multi-AZ standby cannot serve read traffic. Multi-AZ already includes a standby, so option B adds nothing to handle the high read load; only read replicas offload reads.

Option C (hosting databases on multiple Amazon EC2 instances in different AWS Regions) can be complex and may introduce additional latency for users in different locations. It’s also not clear how it would seamlessly handle the migration without affecting traffic flow.

Option D (hosting databases on Amazon EC2 instances behind an Application Load Balancer in different Availability Zones) requires manual management of database instances and high availability solutions, which may not be as easy to set up and maintain as Amazon RDS with Multi-AZ.
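
A minimal boto3 sketch of option A (identifiers and sizes are placeholders; the credentials would normally come from AWS Secrets Manager):

import boto3

rds = boto3.client("rds")

# Multi-AZ adds a synchronous standby for availability...
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="example-only-password",
    MultiAZ=True,
)

# ...and a read replica absorbs the high read traffic.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",
    SourceDBInstanceIdentifier="app-db",
)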

18
Q

A company’s application is running on Amazon EC2 instances within an Auto Scaling group behind an Elastic Load Balancer. Based on the application’s history, the company anticipates a spike in traffic during a holiday each year. A solutions architect must design a strategy to ensure that the Auto Scaling group proactively increases capacity to minimize any performance impact on application users.
Which solution will meet these requirements?

A. Create an Amazon CloudWatch alarm to scale up the EC2 instances when CPU utilization exceeds 90%.

B. Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand.

C. Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during the peak demand period.

D. Configure an Amazon Simple Notification Service (Amazon SNS) notification to send alerts when there are autoscaling:EC2_INSTANCE_LAUNCH events.

A

Option A (creating a CloudWatch alarm based on CPU utilization) is reactive and may lead to delayed scaling since it relies on threshold breaches. It may not provide the required proactive capacity increase.

Option C (increasing the minimum and maximum number of EC2 instances) could work but would require manual intervention to adjust the group’s capacity, which may not be as efficient as using a scheduled action.

Option D (configuring an Amazon SNS notification for EC2_INSTANCE_LAUNCH events) is related to monitoring and notification and does not provide the proactive capacity scaling required to handle the anticipated traffic spike.
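
A minimal boto3 sketch of option B (the group name, dates, and sizes are placeholders):

import boto3

autoscaling = boto3.client("autoscaling")

# Raise capacity ahead of the known spike; the cron-style recurrence
# repeats the action every year. A second action can scale back down.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="holiday-pre-scale",
    Recurrence="0 8 20 12 *",  # 08:00 UTC every December 20
    MinSize=10,
    MaxSize=40,
    DesiredCapacity=20,
)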

19
Q

A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate messages.
What should a solutions architect do to ensure messages are being processed once only?

A. Use the CreateQueue API call to create a new queue.

B. Use the AddPermission API call to add appropriate permissions.

C. Use the ReceiveMessage API call to set an appropriate wait time.

D. Use the ChangeMessageVisibility API call to increase the visibility timeout.

A

A. Using the CreateQueue API call to create a new queue would not address the issue of message duplication. It creates a new queue but doesn’t prevent duplicates.

B. Using the AddPermission API call is related to granting or managing permissions for different AWS accounts to access the queue and does not address message duplication.

C. Using the ReceiveMessage API call to set an appropriate wait time determines how long the ReceiveMessage call should wait for messages to become available in the queue but does not directly address message duplication.
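
A minimal boto3 sketch of option D (the queue URL is a placeholder). The visibility timeout must exceed the worst-case processing time, so a message being written to RDS is never handed to a second instance:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders"

response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)

for message in response.get("Messages", []):
    # Hide the message from other consumers for longer than the
    # slowest expected RDS write.
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=message["ReceiptHandle"],
        VisibilityTimeout=300,  # seconds
    )
    # ...write the record to the RDS table here...
    sqs.delete_message(QueueUrl=queue_url,
                       ReceiptHandle=message["ReceiptHandle"])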

20
Q

An Amazon EC2 administrator created the following policy associated with an IAM group containing several users:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "10.100.100.0/24"
        }
      }
    },
    {
      "Effect": "Deny",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "ec2:Region": "us-east-1"
        }
      }
    }
  ]
}

What is the effect of this policy?

A. Users can terminate an EC2 instance in any AWS Region except us-east-1.

B. Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region.

C. Users can terminate an EC2 instance in the us-east-1 Region when the user’s source IP is 10.100.100.254.

D. Users cannot terminate an EC2 instance in the us-east-1 Region when the user’s source IP is 10.100.100.254.

A

The first statement allows users to terminate EC2 instances only when their source IP is within the range “10.100.100.0/24”.

The second statement denies all EC2 actions in regions other than “us-east-1”.

So, users with IPs in the range “10.100.100.0/24” are allowed to terminate instances, but only in the “us-east-1” region. The source IP “10.100.100.254” falls within the range and satisfies both conditions, allowing termination of instances in the “us-east-1” region.

21
Q

A solutions architect is optimizing a website for an upcoming musical event. Videos of the performances will be streamed in real time and then will be available on demand. The event is expected to attract a global online audience.
Which service will improve the performance of both the real-time and on-demand streaming?

A. Amazon CloudFront

B. AWS Global Accelerator

C. Amazon Route 53

D. Amazon S3 Transfer Acceleration

A

Option B, AWS Global Accelerator, is primarily designed for improving the availability and fault tolerance of applications by using static IP addresses and routing traffic over the AWS global network, but it is not specifically focused on content delivery like Amazon CloudFront.

Option C, Amazon Route 53, is not a service for content delivery or streaming but rather a DNS-based routing service.

Option D, Amazon S3 Transfer Acceleration, improves the upload speed to Amazon S3 but does not provide content delivery and acceleration for streaming media.

22
Q

A company has a three-tier image-sharing application. It uses an Amazon EC2 instance for the front-end layer, another for the backend tier, and a third for the
MySQL database. A solutions architect has been tasked with designing a solution that is highly available and requires the least amount of changes to the application.
Which solution meets these requirements?

A. Use Amazon S3 to host the front-end layer and AWS Lambda functions for the backend layer. Move the database to an Amazon DynamoDB table and use Amazon S3 to store and serve users’ images.

B. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with multiple read replicas to store and serve users’ images.

C. Use Amazon S3 to host the front-end layer and a fleet of Amazon EC2 instances in an Auto Scaling group for the backend layer. Move the database to a memory optimized instance type to store and serve users’ images.

D. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with a Multi-AZ deployment. Use Amazon S3 to store and serve users’ images.

A

A. Use Amazon S3 to host the front-end layer and AWS Lambda functions for the backend layer. Move the database to an Amazon DynamoDB table and use Amazon S3 to store and serve users’ images.

This option suggests a complete architectural shift by moving to AWS Lambda and DynamoDB, which requires extensive changes to the application. DynamoDB is a NoSQL database, and the backend logic would need to be re-implemented using Lambda functions.

B. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with multiple read replicas to store and serve users’ images.

While this option improves database availability with RDS read replicas, it still doesn’t efficiently address image storage and serving, as it proposes using the RDS instance for this purpose.

C. Use Amazon S3 to host the front-end layer and a fleet of Amazon EC2 instances in an Auto Scaling group for the backend layer. Move the database to a memory-optimized instance type to store and serve users’ images.

This option improves front-end hosting with S3 but doesn’t inherently provide high availability for the backend. Storing and serving images from the database instance is not recommended, as it can impact performance. Additionally, changing the database instance type introduces some modifications to the application.

23
Q

A solutions architect is designing a system to analyze the performance of financial markets while the markets are closed. The system will run a series of compute-intensive jobs for 4 hours every night. The time to complete the compute jobs is expected to remain constant, and jobs cannot be interrupted once started. Once completed, the system is expected to run for a minimum of 1 year.
Which type of Amazon EC2 instances should be used to reduce the cost of the system?

A. Spot Instances

B. On-Demand Instances

C. Standard Reserved Instances

D. Scheduled Reserved Instances

A

A. Spot Instances: Spot Instances are generally cost-effective but not suitable when jobs cannot be interrupted once started. AWS can reclaim them with only a two-minute warning when it needs the capacity back, potentially causing job disruptions. For workloads that require uninterrupted operation and consistent performance, Spot Instances are not recommended.

B. On-Demand Instances: On-Demand Instances offer flexibility and immediate availability but are typically the most expensive option for long-term, predictable workloads. They are billed on an hourly or per-second basis, making them costly for workloads with consistent usage patterns like the nightly compute jobs in this scenario.

C. Standard Reserved Instances: Standard Reserved Instances provide cost savings compared to On-Demand Instances in exchange for a one-year or three-year commitment. However, they may not align well with workloads that have specific time windows and durations.

24
Q

A company built a food ordering application that captures user data and stores it for future analysis. The application’s static front end is deployed on an Amazon
EC2 instance. The front-end application sends requests to the backend application running on a separate EC2 instance. The backend application then stores the data in Amazon RDS.
What should a solutions architect do to decouple the architecture and make it scalable?

A. Use Amazon S3 to serve the front-end application, which sends requests to Amazon EC2 to execute the backend application. The backend application will process and store the data in Amazon RDS.

B. Use Amazon S3 to serve the front-end application and write requests to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon EC2 instances to the HTTP/HTTPS endpoint of the topic, and process and store the data in Amazon RDS.

C. Use an EC2 instance to serve the front end and write requests to an Amazon SQS queue. Place the backend instance in an Auto Scaling group, and scale based on the queue depth to process and store the data in Amazon RDS.

D. Use Amazon S3 to serve the static front-end application and send requests to Amazon API Gateway, which writes the requests to an Amazon SQS queue. Place the backend instances in an Auto Scaling group, and scale based on the queue depth to process and store the data in Amazon RDS.

A

A. Using Amazon S3 to serve the front-end and having the front-end send requests directly to Amazon EC2 for the backend: This option lacks decoupling between the front-end and back-end, as the front-end directly calls the backend EC2 instances. It does not address the scalability and fault tolerance requirements effectively.

B. Using Amazon S3 to serve the front-end and having the front-end write requests to an Amazon SNS topic: While this option introduces some level of decoupling, it still requires Amazon EC2 instances to subscribe to the topic, and there may not be efficient scaling mechanisms in place. It also adds unnecessary complexity with SNS for this use case.

C. Using an EC2 instance to serve the front-end and having the front-end write requests to an Amazon SQS queue: This option does introduce decoupling to some extent by using an SQS queue, but it still relies on a single EC2 instance to serve the front-end, which may not be the most scalable or cost-effective solution. Scaling the backend based solely on queue depth might not provide optimal scaling for varying traffic loads.

25
Q

A solutions architect needs to design a managed storage solution for a company’s application that includes high-performance machine learning functionality. This application runs on AWS Fargate and the connected storage needs to have concurrent access to files and deliver high performance.
Which storage option should the solutions architect recommend?

A. Create an Amazon S3 bucket for the application and establish an IAM role for Fargate to communicate with Amazon S3.

B. Create an Amazon FSx for Lustre file share and establish an IAM role that allows Fargate to communicate with FSx for Lustre.

C. Create an Amazon Elastic File System (Amazon EFS) file share and establish an IAM role that allows Fargate to communicate with Amazon Elastic File System (Amazon EFS).

D. Create an Amazon Elastic Block Store (Amazon EBS) volume for the application and establish an IAM role that allows Fargate to communicate with Amazon Elastic Block Store (Amazon EBS).

A

Option A (Amazon S3) is not the best choice for concurrent access and high-performance file access. While Amazon S3 is highly durable and scalable, it is an object storage service, and accessing files from S3 typically involves copying them to local storage, which may not provide the required performance for machine learning workloads.

Option C (Amazon EFS) is a shared file storage service, but its performance may not match the requirements for high-performance machine learning workloads with concurrent access.

Option D (Amazon EBS) provides block storage and is not designed for concurrent access to files by multiple compute instances. It may not be suitable for a high-performance machine learning scenario with concurrent access needs.

26
Q

A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles during peak operating hours. The company wants to use these data points in its existing analytics platform. A solutions architect must determine the most viable multi-tier option to support this architecture. The data points must be accessible from the REST API.
Which action meets these requirements for storing and retrieving location data?

A. Use Amazon Athena with Amazon S3.

B. Use Amazon API Gateway with AWS Lambda.

C. Use Amazon QuickSight with Amazon Redshift.

D. Use Amazon API Gateway with Amazon Kinesis Data Analytics.

A

A. Amazon Athena with Amazon S3: Amazon Athena is a query service that allows you to analyze data stored in Amazon S3 using standard SQL queries. While it’s great for ad-hoc querying of data, it’s not designed for real-time data retrieval or serving data via REST APIs.

C. Amazon QuickSight with Amazon Redshift: Amazon QuickSight is a business intelligence tool, and Amazon Redshift is a data warehousing service. While they are suitable for analytics and reporting, they are not designed for real-time tracking or REST API integration.

D. Amazon API Gateway with Amazon Kinesis Data Analytics: Amazon Kinesis Data Analytics is more focused on real-time data streaming and complex event processing, making it less suitable for tracking the location of bicycles and serving data via REST APIs. It’s better suited for scenarios where real-time processing of continuous data streams is required.

27
Q

A solutions architect is designing a web application that will run on Amazon EC2 instances behind an Application Load Balancer (ALB). The company strictly requires that the application be resilient against malicious internet activity and attacks, and protect against new common vulnerabilities and exposures.
What should the solutions architect recommend?

A. Leverage Amazon CloudFront with the ALB endpoint as the origin.

B. Deploy an appropriate managed rule for AWS WAF and associate it with the ALB.

C. Subscribe to AWS Shield Advanced and ensure common vulnerabilities and exposures are blocked.

D. Configure network ACLs and security groups to allow only ports 80 and 443 to access the EC2 instances.

A

Option A (Leverage Amazon CloudFront with the ALB endpoint as the origin) can enhance performance and caching but does not provide the same level of security protection as AWS WAF. While CloudFront can help distribute traffic and protect against some types of attacks, it does not have the focused security features of AWS WAF.

Option C (Subscribe to AWS Shield Advanced and ensure common vulnerabilities and exposures are blocked) focuses on DDoS protection and may not address the specific need to protect against common web vulnerabilities and attacks that AWS WAF is designed for.

Option D (Configure network ACLs and security groups to allow only ports 80 and 443 to access the EC2 instances) addresses network-level security but does not provide the application-layer protection and rule-based filtering capabilities that AWS WAF offers.
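
A sketch of option B using the AWS managed Common Rule Set (the ACL name and the alb_arn value are placeholders):

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")
alb_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc123"

acl = wafv2.create_web_acl(
    Name="web-protection",
    Scope="REGIONAL",  # REGIONAL scope is what attaches to an ALB
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "common-rules",
        "Priority": 0,
        # Managed rule group maintained by AWS to keep pace with new
        # common vulnerabilities and exposures.
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }},
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "common-rules"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "web-protection"},
)

wafv2.associate_web_acl(WebACLArn=acl["Summary"]["ARN"], ResourceArn=alb_arn)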

28
Q

A company has an application that calls AWS Lambda functions. A code review shows that database credentials are stored in a Lambda function’s source code, which violates the company’s security policy. The credentials must be securely stored and must be automatically rotated on an ongoing basis to meet security policy requirements.
What should a solutions architect recommend to meet these requirements in the MOST secure manner?

A. Store the password in AWS CloudHSM. Associate the Lambda function with a role that can use the key ID to retrieve the password from CloudHSM. Use CloudHSM to automatically rotate the password.

B. Store the password in AWS Secrets Manager. Associate the Lambda function with a role that can use the secret ID to retrieve the password from Secrets Manager. Use Secrets Manager to automatically rotate the password.

C. Store the password in AWS Key Management Service (AWS KMS). Associate the Lambda function with a role that can use the key ID to retrieve the password from AWS KMS. Use AWS KMS to automatically rotate the uploaded password.

D. Move the database password to an environment variable that is associated with the Lambda function. Retrieve the password from the environment variable by invoking the function. Create a deployment script to automatically rotate the password.

A

Option A (AWS CloudHSM) might be suitable for sensitive data storage, but it may involve additional complexity and costs compared to AWS Secrets Manager. It may be an overkill for storing and rotating database credentials.

Option C (AWS KMS) is primarily designed for encryption and key management rather than secret storage. While it’s possible to use KMS to encrypt secrets, AWS Secrets Manager is a more specialized service for secret management, including rotation.

Option D (Environment variable) is less secure because environment variables can be accessed by anyone with access to the Lambda function’s configuration. Additionally, implementing automatic rotation of secrets would require custom scripting, which could be error-prone and less secure compared to the built-in rotation capability of AWS Secrets Manager.
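
A minimal boto3 sketch of the retrieval side of option B (the secret name is a placeholder; rotation itself is configured on the secret with a rotation Lambda function):

import boto3
import json

secrets = boto3.client("secretsmanager")

def get_db_credentials():
    # Fetch the current secret version at invocation time, so the
    # function always sees the post-rotation value.
    response = secrets.get_secret_value(SecretId="prod/app/db-credentials")
    return json.loads(response["SecretString"])

creds = get_db_credentials()
# ...connect using creds["username"] and creds["password"]...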

29
Q

A company is managing health records on-premises. The company must keep these records indefinitely, disable any modifications to the records once they are stored, and granularly audit access at all levels. The chief technology officer (CTO) is concerned because there are already millions of records not being used by any application, and the current infrastructure is running out of space. The CTO has requested a solutions architect design a solution to move existing data and support future records.
Which services can the solutions architect recommend to meet these requirements?

A. Use AWS DataSync to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with data events.

B. Use AWS Storage Gateway to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with management events.

C. Use AWS DataSync to move existing data to AWS. Use Amazon S3 to store existing and new data. Enable Amazon S3 object lock and enable AWS CloudTrail with management events.

D. Use AWS Storage Gateway to move existing data to AWS. Use Amazon Elastic Block Store (Amazon EBS) to store existing and new data. Enable Amazon S3 object lock and enable Amazon S3 server access logging.

A

Option B relies on AWS CloudTrail management events, which record control-plane activity such as bucket configuration changes; they do not record object-level reads and writes, so they cannot provide the granular access audit the company requires.

Option C correctly uses AWS DataSync for the one-time migration, but like option B it enables only management events, which do not capture object-level access to the records.

Option D suggests using Amazon EBS, which is a block storage service, not suitable for storing large volumes of object data like health records. Additionally, S3 server access logging is less comprehensive than CloudTrail data events for granular audit purposes.

Option A combines AWS DataSync to move the existing data, S3 Object Lock to block modifications once records are stored, and CloudTrail data events to audit object-level access, meeting all three requirements.
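
A sketch of option A's two protection pieces (bucket, trail, and retention values are placeholders; in compliance mode a retention period can be extended but never shortened or removed):

import boto3

s3 = boto3.client("s3")
cloudtrail = boto3.client("cloudtrail")

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(Bucket="example-health-records",
                 ObjectLockEnabledForBucket=True)
s3.put_object_lock_configuration(
    Bucket="example-health-records",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)

# Data events record object-level reads and writes, giving the granular
# access audit that management events alone do not.
cloudtrail.put_event_selectors(
    TrailName="records-audit-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::example-health-records/"],
        }],
    }],
)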

30
Q

A company wants to use Amazon S3 for the secondary copy of its on-premises dataset. The company would rarely need to access this copy. The storage solution’s cost should be minimal.
Which storage solution meets these requirements?

A. S3 Standard

B. S3 Intelligent-Tiering

C. S3 Standard-Infrequent Access (S3 Standard-IA)

D. S3 One Zone-Infrequent Access (S3 One Zone-IA)

A

A. S3 Standard: S3 Standard is designed for frequently accessed data and offers low-latency access. It is more expensive than other storage classes like S3 Standard-IA and S3 One Zone-IA, making it less cost-effective for rarely accessed data.

B. S3 Intelligent-Tiering: S3 Intelligent-Tiering is designed for data with varying access patterns, automatically moving objects between access tiers. While it provides flexibility for changing access patterns, it may incur additional costs due to tiering, and it may not be as cost-effective as S3 One Zone-IA for rarely accessed data.

C. S3 Standard-Infrequent Access (S3 Standard-IA): S3 Standard-IA is designed for infrequently accessed data but still maintains low-latency access. While it’s suitable for infrequently accessed data, it may have slightly higher storage costs compared to S3 One Zone-IA, which is optimized for minimal costs.

31
Q

A company’s operations team has an existing Amazon S3 bucket configured to notify an Amazon SQS queue when new objects are created within the bucket. The development team also wants to receive events when new objects are created. The existing operations team workflow must remain intact.
Which solution would satisfy these requirements?

A. Create another SQS queue. Update the S3 events in the bucket to also update the new queue when a new object is created.

B. Create a new SQS queue that only allows Amazon S3 to access the queue. Update Amazon S3 to update this queue when a new object is created.

C. Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send events to the new topic. Update both queues to poll Amazon SNS.

D. Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send events to the new topic. Add subscriptions for both queues in the topic.

A

Option A would require modifying the S3 events directly, potentially impacting the existing workflow of the operations team.

Option B would create a new SQS queue only accessible by Amazon S3, but it doesn’t provide a solution for the development team to receive events.

Option C introduces an unnecessary polling step and doesn’t directly address the requirement of allowing both teams to receive events independently.
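
A minimal boto3 sketch of option D's fan-out (ARNs and names are placeholders; each queue's access policy must also allow the topic to send messages to it):

import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

topic_arn = "arn:aws:sns:us-east-1:111122223333:object-created"

# Both teams keep their own queue; each subscribes to the same topic.
for queue_arn in (
    "arn:aws:sqs:us-east-1:111122223333:ops-queue",
    "arn:aws:sqs:us-east-1:111122223333:dev-queue",
):
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# The bucket now notifies the topic instead of a single queue.
s3.put_bucket_notification_configuration(
    Bucket="example-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [{
            "TopicArn": topic_arn,
            "Events": ["s3:ObjectCreated:*"],
        }],
    },
)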

32
Q

An application runs on Amazon EC2 instances in private subnets. The application needs to access an Amazon DynamoDB table. What is the MOST secure way to access the table while ensuring that the traffic does not leave the AWS network?

A. Use a VPC endpoint for DynamoDB.

B. Use a NAT gateway in a public subnet.

C. Use a NAT instance in a private subnet.

D. Use the internet gateway attached to the VPC.

A

Options B, C, and D involve NAT gateways, NAT instances, or internet gateways, which introduce a path to the public internet and may expose your traffic to potential security risks. These options are not as secure as using a VPC endpoint when your goal is to keep traffic within the AWS network and enhance security.
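
A minimal boto3 sketch of option A (VPC, route table, and Region values are placeholders):

import boto3

ec2 = boto3.client("ec2")

# A gateway endpoint adds a DynamoDB route to the private subnets'
# route tables, so table traffic never leaves the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-private-a", "rtb-private-b"],
)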

33
Q

A company built an application that lets users check in to places they visit, rank the places, and add reviews about their experiences. The application is successful with a rapid increase in the number of users every month.
The chief technology officer fears the database supporting the current infrastructure may not handle the new load the following month because the single Amazon RDS for MySQL instance has triggered alarms related to resource exhaustion due to read requests.
What can a solutions architect recommend to prevent service interruptions at the database layer with minimal changes to code?

A. Create RDS read replicas and redirect read-only traffic to the read replica endpoints. Enable a Multi-AZ deployment.

B. Create an Amazon EMR cluster and migrate the data to a Hadoop Distributed File System (HDFS) with a replication factor of 3.

C. Create an Amazon ElastiCache cluster and redirect all read-only traffic to the cluster. Set up the cluster to be deployed in three Availability Zones.

D. Create an Amazon DynamoDB table to replace the RDS instance and redirect all read-only traffic to the DynamoDB table. Enable DynamoDB Accelerator to offload traffic from the main table.

A

Option B suggests migrating data to Amazon EMR with HDFS, which would involve a significant rearchitecting of the database layer.

Option C recommends using Amazon ElastiCache, which is an in-memory data store, and it might not be a suitable replacement for a relational database like Amazon RDS without substantial code changes.

Option D proposes replacing RDS with Amazon DynamoDB, which is a NoSQL database and would likely require substantial code changes to adapt to the different data model and query patterns.

34
Q

A company is looking for a solution that can store video archives in AWS from old news footage. The company needs to minimize costs and will rarely need to restore these files. When the files are needed, they must be available in a maximum of five minutes.
What is the MOST cost-effective solution?

A. Store the video archives in Amazon S3 Glacier and use Expedited retrievals.

B. Store the video archives in Amazon S3 Glacier and use Standard retrievals.

C. Store the video archives in Amazon S3 Standard-Infrequent Access (S3 Standard-IA).

D. Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).

A

B. Store the video archives in Amazon S3 Glacier and use Standard retrievals: Standard retrievals typically complete in 3-5 hours, which does not meet the requirement that the files be available within a maximum of five minutes.

C. Store the video archives in Amazon S3 Standard-Infrequent Access (S3 Standard-IA): S3 Standard-IA keeps the files immediately available, but its storage price is considerably higher than S3 Glacier. For archives that are rarely restored, paying more every month for constant availability is not cost-effective.

D. Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA): S3 One Zone-IA is cheaper than S3 Standard-IA but stores data in a single Availability Zone, reducing durability, and its storage price is still higher than S3 Glacier.

Option A is the most cost-effective: S3 Glacier has the lowest storage cost of these choices, and Expedited retrievals typically make data available within 1-5 minutes, satisfying the five-minute requirement. Because restores are rare, the per-retrieval premium for Expedited access is far smaller than the year-round savings on storage.

35
Q

A company has created a VPC with multiple private subnets in multiple Availability Zones (AZs) and one public subnet in one of the AZs. The public subnet is used to launch a NAT gateway. There are instances in the private subnets that use a NAT gateway to connect to the internet. In case of an AZ failure, the company wants to ensure that the instances are not all experiencing internet connectivity issues and that there is a backup plan ready.
Which solution should a solutions architect recommend that is MOST highly available?

A. Create a new public subnet with a NAT gateway in the same AZ. Distribute the traffic between the two NAT gateways.

B. Create an Amazon EC2 NAT instance in a new public subnet. Distribute the traffic between the NAT gateway and the NAT instance.

C. Create public subnets in each AZ and launch a NAT gateway in each subnet. Configure the traffic from the private subnets in each AZ to the respective NAT gateway.

D. Create an Amazon EC2 NAT instance in the same public subnet. Replace the NAT gateway with the NAT instance and associate the instance with an Auto Scaling group with an appropriate scaling policy.

A

Option A suggests creating a new public subnet with a NAT gateway in the same AZ, but this doesn’t provide redundancy across AZs, making it less highly available.

Option B involves using an Amazon EC2 NAT instance, which requires more manual management and doesn’t inherently provide the same level of redundancy as multiple NAT gateways in different AZs.

Option D suggests using an Amazon EC2 NAT instance with Auto Scaling, but it still relies on a single instance, which can be a single point of failure, and it doesn’t offer the same level of redundancy as Option C.

36
Q

A healthcare company stores highly sensitive patient records. Compliance requires that multiple copies be stored in different locations. Each record must be stored for 7 years. The company has a service level agreement (SLA) to provide records to government agencies immediately for the first 30 days and then within 4 hours of a request thereafter.
What should a solutions architect recommend?

A. Use Amazon S3 with cross-Region replication enabled. After 30 days, transition the data to Amazon S3 Glacier using lifecycle policy.

B. Use Amazon S3 with cross-origin resource sharing (CORS) enabled. After 30 days, transition the data to Amazon S3 Glacier using a lifecycle policy.

C. Use Amazon S3 with cross-Region replication enabled. After 30 days, transition the data to Amazon S3 Glacier Deep Archive using a lifecycle policy.

D. Use Amazon S3 with cross-origin resource sharing (CORS) enabled. After 30 days, transition the data to Amazon S3 Glacier Deep Archive using a lifecycle policy.

A

Options B and D do not address the specific requirement of transitioning the data to Amazon S3 Glacier or Amazon S3 Glacier Deep Archive, which offer cost-effective long-term storage solutions but with longer retrieval times.

Option C suggests transitioning data to Amazon S3 Glacier Deep Archive, which may provide the lowest cost for long-term storage but with significantly longer retrieval times. This may not meet the requirement of providing records within 4 hours of a request after the initial 30 days.
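
A minimal boto3 sketch of the lifecycle part of Option A (bucket name and retention figure are hypothetical placeholders):

import boto3

s3 = boto3.client("s3")

# Transition objects to the S3 Glacier storage class 30 days after
# creation; cross-Region replication would be configured separately
# with put_bucket_replication.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-patient-records",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},  # ~7 years, per the retention requirement
            }
        ]
    },
)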

37
Q

A company recently deployed a new auditing system to centralize information about operating system versions, patching, and installed software for Amazon EC2 instances. A solutions architect must ensure all instances provisioned through EC2 Auto Scaling groups successfully send reports to the auditing system as soon as they are launched and terminated.
Which solution achieves these goals MOST efficiently?

A. Use a scheduled AWS Lambda function and run a script remotely on all EC2 instances to send data to the audit system.

B. Use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system when instances are launched and terminated.

C. Use an EC2 Auto Scaling launch configuration to run a custom script through user data to send data to the audit system when instances are launched and terminated.

D. Run a custom script on the instance operating system to send data to the audit system. Configure the script to be executed by the EC2 Auto Scaling group when the instance starts and is terminated.

A

Option A (using a scheduled Lambda function) may introduce unnecessary complexity and delays, as it relies on periodic scheduling and remote execution.

Option C (using user data in the launch configuration) can report at launch, but user data runs only when an instance starts; it never runs at termination, so terminated instances would not be reported. Lifecycle hooks cover both transitions precisely.

Option D (running a custom script on the instance OS) would require more manual configuration and may not be as efficient as using lifecycle hooks, which are designed for this specific use case.
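
A minimal boto3 sketch of Option B (the group name and timeout are hypothetical); one hook pauses the instance at launch and another at termination so the report can be sent before the lifecycle action completes:

import boto3

autoscaling = boto3.client("autoscaling")

# Pause instances at launch and termination so a script (or an
# EventBridge-triggered Lambda) can report to the audit system.
for transition in (
    "autoscaling:EC2_INSTANCE_LAUNCHING",
    "autoscaling:EC2_INSTANCE_TERMINATING",
):
    autoscaling.put_lifecycle_hook(
        AutoScalingGroupName="web-asg",
        LifecycleHookName=f"audit-{transition.split('_')[-1].lower()}",
        LifecycleTransition=transition,
        HeartbeatTimeout=300,           # seconds allowed for the report
        DefaultResult="CONTINUE",       # proceed even if reporting fails
    )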

38
Q

A company recently implemented hybrid cloud connectivity using AWS Direct Connect and is migrating data to Amazon S3. The company is looking for a fully managed solution that will automate and accelerate the replication of data between the on-premises storage systems and AWS storage services.
Which solution should a solutions architect recommend to keep the data private?

A. Deploy an AWS DataSync agent for the on-premises environment. Configure a sync job to replicate the data and connect it with an AWS service endpoint.

B. Deploy an AWS DataSync agent for the on-premises environment. Schedule a batch job to replicate point-in-time snapshots to AWS.

C. Deploy an AWS Storage Gateway volume gateway for the on-premises environment. Configure it to store data locally, and asynchronously back up point-in- time snapshots to AWS.

D. Deploy an AWS Storage Gateway file gateway for the on-premises environment. Configure it to store data locally, and asynchronously back up point-in-time snapshots to AWS.

A

Option B misapplies AWS DataSync: the service replicates file data directly rather than point-in-time snapshots, so scheduling a batch snapshot job does not match how DataSync operates and adds unnecessary delay to the replication.

Options C and D use AWS Storage Gateway, which is a hybrid storage service rather than a managed transfer service: it stores data locally and asynchronously uploads point-in-time snapshots, so it does not automate and accelerate bulk replication between on-premises storage and AWS storage services the way DataSync does.
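
A minimal boto3 sketch of Option A (ARNs are hypothetical; the NFS and S3 locations would be created first with create_location_nfs and create_location_s3):

import boto3

datasync = boto3.client("datasync")

# The DataSync agent is deployed on premises; the task links the
# on-premises NFS location to the S3 destination location.
task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-src",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-dst",
    Name="onprem-to-s3-replication",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])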

39
Q

A company has 150 TB of archived image data stored on-premises that needs to be moved to the AWS Cloud within the next month. The company’s current network connection allows up to 100 Mbps uploads for this purpose during the night only.
What is the MOST cost-effective mechanism to move this data and meet the migration deadline?

A. Use AWS Snowmobile to ship the data to AWS.

B. Order multiple AWS Snowball devices to ship the data to AWS.

C. Enable Amazon S3 Transfer Acceleration and securely upload the data.

D. Create an Amazon S3 VPC endpoint and establish a VPN to upload the data.

A

Option A (AWS Snowmobile) is typically used for exabyte-scale data transfers and may not be cost-effective or necessary for 150 TB of data. It’s designed for much larger data volumes.

Option C (Amazon S3 Transfer Acceleration) accelerates data uploads to Amazon S3, but it still relies on your existing network bandwidth, which is limited to 100 Mbps during the night. This may not meet the migration deadline.

Option D (Amazon S3 VPC endpoint and VPN) does not address the limited network bandwidth issue, and relying solely on a VPN may not significantly improve data transfer speeds.
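
The underlying arithmetic, as a quick sketch (assuming an 8-hour nightly upload window), shows why the network link cannot meet the deadline:

# 150 TB over a 100 Mbps link, uploading only at night.
data_bits = 150e12 * 8                 # 150 TB expressed in bits
link_bps = 100e6                       # 100 Mbps
seconds_needed = data_bits / link_bps  # ~12,000,000 s of pure transfer time
nightly_seconds = 8 * 3600             # assume an 8-hour nightly window
days_needed = seconds_needed / nightly_seconds
print(f"{days_needed:.0f} days")       # ~417 days, far past the one-month deadline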

40
Q

A public-facing web application queries a database hosted on an Amazon EC2 instance in a private subnet. A large number of queries involve multiple table joins, and the application performance has been degrading due to an increase in complex queries. The application team will be performing updates to improve performance.
What should a solutions architect recommend to the application team? (Choose two.)

A. Cache query data in Amazon SQS

B. Create a read replica to offload queries

C. Migrate the database to Amazon Athena

D. Implement Amazon DynamoDB Accelerator to cache data.

E. Migrate the database to Amazon RDS

A

A. Caching query data in Amazon SQS (Simple Queue Service) is not a typical approach for improving database query performance. SQS is a message queue service used for decoupling components in a distributed system and is not designed for caching query results.

C. Migrating the database to Amazon Athena is not recommended for improving query performance in this context. Amazon Athena is a serverless query service for analyzing data stored in Amazon S3 using SQL queries. It is typically used for ad-hoc querying of data in S3 and may not be suitable for hosting a traditional relational database.

D. Implementing Amazon DynamoDB Accelerator (DAX) is not relevant here. DAX caches data for Amazon DynamoDB, a NoSQL database service, and cannot be used with the relational database hosted on the EC2 instance.
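
To illustrate Options B and E together, a minimal boto3 sketch (identifiers are hypothetical) creating a read replica once the database runs on Amazon RDS:

import boto3

rds = boto3.client("rds")

# After migrating to RDS (Option E), a read replica (Option B) can
# absorb the complex read-only join queries and offload the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica",
    SourceDBInstanceIdentifier="app-db",
)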

41
Q

A company is seeing access requests by some suspicious IP addresses. The security team discovers the requests are from different IP addresses under the same CIDR range.
What should a solutions architect recommend to the team?

A. Add a rule in the inbound table of the security group to deny the traffic from that CIDR range.

B. Add a rule in the outbound table of the security group to deny the traffic from that CIDR range.

C. Add a deny rule in the inbound table of the network ACL with a lower number than other rules.

D. Add a deny rule in the outbound table of the network ACL with a lower rule number than other rules.

A

Option A is not possible because security groups do not support deny rules; they are stateful, allow-only filters that operate at the instance level.

Option B fails for the same reason (security groups cannot deny traffic), and the outbound rules of a security group govern traffic leaving the instances, which would do nothing to stop incoming requests from the specified CIDR range.

Option D is not suitable because it places the deny rule in the outbound table of the network ACL, which filters traffic leaving the subnet; it would not block inbound requests from the suspicious CIDR range.
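
A minimal boto3 sketch of Option C (the NACL ID and CIDR are hypothetical; 203.0.113.0/24 is a documentation range):

import boto3

ec2 = boto3.client("ec2")

# A low rule number is evaluated first, so this deny takes effect
# before any broader allow rules in the inbound table.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=50,                 # lower than the existing allow rules
    Protocol="-1",                 # all protocols
    RuleAction="deny",
    Egress=False,                  # inbound table
    CidrBlock="203.0.113.0/24",    # the suspicious CIDR range
)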

42
Q

A company recently expanded globally and wants to make its application accessible to users in those geographic locations. The application is deployed on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. The company needs the ability to shift traffic from resources in one region to another.
What should a solutions architect recommend?

A. Configure an Amazon Route 53 latency routing policy.

B. Configure an Amazon Route 53 geolocation routing policy.

C. Configure an Amazon Route 53 geoproximity routing policy.

D. Configure an Amazon Route 53 multivalue answer routing policy.

A

Option A (Amazon Route 53 latency routing policy) is more focused on routing traffic based on the lowest latency to a set of resources, but it may not provide the granular control needed to shift traffic between regions.

Option B (Amazon Route 53 geolocation routing policy) routes traffic based on the geographic location of the user, but it doesn’t offer the same level of control and flexibility for shifting traffic between regions as the geoproximity routing policy does.

Option D (Amazon Route 53 multivalue answer routing policy) is used for distributing traffic across multiple resources in a simple round-robin fashion, but it doesn’t provide the geographic control and shifting capabilities required for this scenario.
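
As a sketch of Option C, assuming Route 53's newer support for geoproximity routing on ordinary record sets (older setups used Traffic Flow policies instead); the hosted zone, record name, and ALB values are hypothetical, and Bias is the dial that shifts traffic toward or away from a region:

import boto3

route53 = boto3.client("route53")

# One record per region for the same name; a matching record with its
# own SetIdentifier and Bias would exist for the second region.
# Raising or lowering Bias shifts the share of traffic between regions.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "us-east-1",
                    "GeoProximityLocation": {"AWSRegion": "us-east-1", "Bias": 20},
                    "AliasTarget": {
                        "HostedZoneId": "Z35SXDOTRQ7X7K",  # ALB hosted zone (hypothetical)
                        "DNSName": "alb-use1.example.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ]
    },
)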

43
Q

A company wants to replicate its data to AWS to recover in the event of a disaster. Today, a system administrator has scripts that copy data to an NFS share.
Individual backup files need to be accessed with low latency by application administrators to deal with errors in processing.
What should a solutions architect recommend to meet these requirements?

A. Modify the script to copy data to an Amazon S3 bucket instead of the on-premises NFS share.

B. Modify the script to copy data to an Amazon S3 Glacier Archive instead of the on-premises NFS share.

C. Modify the script to copy data to an Amazon Elastic File System (Amazon EFS) volume instead of the on-premises NFS share.

D. Modify the script to copy data to an AWS Storage Gateway for File Gateway virtual appliance instead of the on-premises NFS share.

A

Option A (Modify the script to copy data to an Amazon S3 bucket) is a valid approach, but it doesn’t provide the low-latency access to individual backup files that the scenario requires. Data stored directly in Amazon S3 may not be as easily accessible via NFS.

Option B (Modify the script to copy data to an Amazon S3 Glacier Archive) is not suitable because Amazon S3 Glacier is designed for long-term archival storage and does not offer low-latency access.

Option C (Modify the script to copy data to an Amazon Elastic File System - EFS) is a valid option, but it may introduce additional complexity and cost compared to using AWS Storage Gateway for File Gateway, which is specifically designed for hybrid cloud storage scenarios.
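
As a sketch of Option D (ARNs are hypothetical, and the File Gateway appliance must already be activated), an NFS file share backed by S3 is created so the existing NFS-based scripts keep working with low-latency local cache access:

import boto3

storagegateway = boto3.client("storagegateway")

# The file share exposes an S3 bucket over NFS through the local
# gateway appliance, so backup files stay accessible with low latency
# while the data is durably stored in AWS.
storagegateway.create_nfs_file_share(
    ClientToken="replication-share-001",
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678",
    Role="arn:aws:iam::111122223333:role/FileGatewayS3Access",
    LocationARN="arn:aws:s3:::example-dr-backups",
)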

44
Q

An application requires a development environment (DEV) and production environment (PROD) for several years. The DEV instances will run for 10 hours each day during normal business hours, while the PROD instances will run 24 hours each day. A solutions architect needs to determine a compute instance purchase strategy to minimize costs.
Which solution is the MOST cost-effective?

A. DEV with Spot Instances and PROD with On-Demand Instances

B. DEV with On-Demand Instances and PROD with Spot Instances

C. DEV with Scheduled Reserved Instances and PROD with Reserved Instances

D. DEV with On-Demand Instances and PROD with Scheduled Reserved Instances

A

Option A (DEV with Spot Instances and PROD with On-Demand Instances) might result in cost savings for DEV instances but can be less predictable and stable, which may not be suitable for a development environment.

Option B (DEV with On-Demand Instances and PROD with Spot Instances) might save costs for PROD instances but could lead to performance fluctuations in the production environment, which may not be acceptable.

Option D (DEV with On-Demand Instances and PROD with Scheduled Reserved Instances) reverses the appropriate purchase types: On-Demand pricing for DEV gives up the discount a Scheduled Reserved Instance would provide for its predictable 10-hour daily window, and Scheduled Reserved Instances for PROD cost more than standard Reserved Instances for a workload that runs 24/7.

45
Q

A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones. The instances host applications that use a hierarchical directory structure. The applications need to read and write rapidly and concurrently to shared storage.
What should a solutions architect do to meet these requirements?

A. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2 instance.

B. Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC.

C. Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume. Attach the EBS volume to all the EC2 instances.

D. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. Synchronize the EBS volumes across the different EC2 instances.

A

Option B (Creating an Amazon S3 bucket) is an object storage service, which might not be the best fit for applications that require file-level access with hierarchical directory structures. S3 is great for storing objects but doesn’t offer native file system semantics.

Option C (a shared EBS volume) provides block-level storage, but it cannot meet the requirement: an EBS volume cannot be attached to instances in different Availability Zones, and even EBS Multi-Attach (which works only within one AZ) requires a cluster-aware file system to coordinate concurrent writes. This makes it unsuitable for this use case.

Option D (Creating EBS volumes on each instance and synchronizing them) would require manual synchronization and could lead to data consistency issues and complexity, especially when dealing with multiple instances.
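
A minimal boto3 sketch of Option A (subnet and security group IDs are hypothetical):

import boto3

efs = boto3.client("efs")

# One file system, with a mount target in each AZ's subnet, so
# instances in both AZs can read and write concurrently to the same
# hierarchical directory structure.
fs = efs.create_file_system(CreationToken="shared-app-fs")
for subnet_id in ("subnet-aaa111", "subnet-bbb222"):
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
# Each instance then mounts the file system, e.g. with the
# amazon-efs-utils helper: mount -t efs <FileSystemId>:/ /mnt/shared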

46
Q

A solutions architect observes that a nightly batch processing job is automatically scaled up for 1 hour before the desired Amazon EC2 capacity is reached. The peak capacity is the same every night and the batch jobs always start at 1 AM. The solutions architect needs to find a cost-effective solution that will allow for the desired EC2 capacity to be reached quickly and allow the Auto Scaling group to scale down after the batch jobs are complete.
What should the solutions architect do to meet these requirements?

A. Increase the minimum capacity for the Auto Scaling group.

B. Increase the maximum capacity for the Auto Scaling group.

C. Configure scheduled scaling to scale up to the desired compute level.

D. Change the scaling policy to add more EC2 instances during each scaling operation.

A

Option A (Increasing the minimum capacity) may work, but it means the instances would always be running at a higher capacity, potentially incurring unnecessary costs during non-batch processing hours.

Option B (Increasing the maximum capacity) would allow the Auto Scaling group to launch more instances if needed, but it wouldn’t automatically scale down after the batch job, potentially leading to overprovisioning.

Option D (Changing the scaling policy to add more instances during each scaling operation) is not as suitable for this scenario because it doesn’t address the specific time-based requirement of the nightly batch job. Scheduled scaling allows you to precisely control when scaling actions occur.
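
A minimal boto3 sketch of Option C (group name, capacities, and times are hypothetical):

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the 1 AM batch window and back down once
# the jobs are done; recurrence expressions are cron syntax in UTC.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-asg",
    ScheduledActionName="scale-up-for-nightly-batch",
    Recurrence="45 0 * * *",     # 12:45 AM, ahead of the 1 AM start
    DesiredCapacity=20,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-asg",
    ScheduledActionName="scale-down-after-nightly-batch",
    Recurrence="0 3 * * *",      # after the batch jobs complete
    DesiredCapacity=2,
)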

47
Q

A Solutions Architect must design a web application that will be hosted on AWS, allowing users to purchase access to premium, shared content that is stored in an S3 bucket. Upon payment, content will be available for download for 14 days before the user is denied access.
Which of the following would be the LEAST complicated implementation?

A. Use an Amazon CloudFront distribution with an origin access identity (OAI). Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design a Lambda function to remove data that is older than 14 days.

B. Use an S3 bucket and provide direct access to the file. Design the application to track purchases in a DynamoDB table. Configure a Lambda function to remove data that is older than 14 days based on a query to Amazon DynamoDB.

C. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 14 days for the URL.

D. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 60 minutes for the URL and recreate the URL as necessary.

A

Option A, while effective, adds complexity with a Lambda function for data removal, which may require additional management and could lead to inconsistencies if not configured correctly.

Option B involves direct access to S3 and requires tracking purchases in DynamoDB, which adds complexity to the application logic and management of purchase records.

Option D, while using CloudFront with signed URLs, sets a shorter expiration time (60 minutes) and requires the application to recreate URLs as necessary. This approach is more complex to manage, as you’d need to handle URL recreation and potentially authentication more frequently.
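
As a sketch of Option C (the key-pair ID, key path, domain, and object path are all hypothetical; this uses the cryptography package for signing), the signed URL itself enforces the 14-day window, so no cleanup job is needed:

from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Private key paired with the CloudFront public key (path hypothetical).
    with open("cloudfront_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # key ID hypothetical

# The expiration is baked into the URL; after 14 days the URL is
# simply rejected by CloudFront.
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/premium/report.pdf",
    date_less_than=datetime.utcnow() + timedelta(days=14),
)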

48
Q

A solutions architect is designing a mission-critical web application. It will consist of Amazon EC2 instances behind an Application Load Balancer and a relational database. The database should be highly available and fault tolerant.
Which database implementations will meet these requirements? (Choose two.)

A. Amazon Redshift

B. Amazon DynamoDB

C. Amazon RDS for MySQL

D. MySQL-compatible Amazon Aurora Multi-AZ

E. Amazon RDS for SQL Server Standard Edition Multi-AZ

A

Amazon Redshift (Option A) is a data warehousing solution, not a traditional relational database, and is typically used for analytical workloads rather than transactional web applications.

Amazon DynamoDB (Option B) is a highly available, managed NoSQL database service, but it is not a relational database, which this application requires.

Amazon RDS for MySQL (Option C), as written, does not specify a Multi-AZ deployment. A single-AZ RDS instance is not fault tolerant; the Multi-AZ configurations in options D and E provide the required high availability and fault tolerance.
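
For Option E, a minimal boto3 sketch (identifiers and sizes are hypothetical) showing that the fault tolerance comes from a single flag:

import boto3

rds = boto3.client("rds")

# MultiAZ=True provisions a synchronous standby in another AZ with
# automatic failover, which provides the required fault tolerance.
rds.create_db_instance(
    DBInstanceIdentifier="web-app-db",
    Engine="sqlserver-se",               # SQL Server Standard Edition
    DBInstanceClass="db.m5.large",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",     # placeholder credential
    AllocatedStorage=100,
    LicenseModel="license-included",
    MultiAZ=True,
)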

49
Q

A company’s web application is running on Amazon EC2 instances behind an Application Load Balancer. The company recently changed its policy, which now requires the application to be accessed from one specific country only.
Which configuration will meet this requirement?

A. Configure the security group for the EC2 instances.

B. Configure the security group on the Application Load Balancer.

C. Configure AWS WAF on the Application Load Balancer in a VPC.

D. Configure the network ACL for the subnet that contains the EC2 instances.

A

Option A (Configuring the security group for the EC2 instances) primarily controls inbound and outbound traffic at the instance level and doesn’t provide geolocation-based access control.

Option B (Configuring the security group on the Application Load Balancer) deals with controlling traffic to and from the load balancer, but it doesn’t provide country-specific access control.

Option D (Configuring the network ACL for the subnet that contains the EC2 instances) is used to control traffic at the subnet level, but it doesn’t offer the granularity needed to restrict access to a specific country.
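
To make Option C concrete, a hedged boto3 sketch (names and the country code are hypothetical placeholders) of a web ACL that blocks requests from every country except one; the ACL would then be associated with the ALB:

import boto3

wafv2 = boto3.client("wafv2")

# Block any request whose source country is not the allowed one;
# requests from the allowed country fall through to the default Allow.
wafv2.create_web_acl(
    Name="allow-single-country",
    Scope="REGIONAL",                      # REGIONAL scope is used for ALBs
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "block-other-countries",
            "Priority": 0,
            "Statement": {
                "NotStatement": {
                    "Statement": {"GeoMatchStatement": {"CountryCodes": ["DE"]}}
                }
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "BlockOtherCountries",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AllowSingleCountry",
    },
)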

50
Q

A solutions architect has created two IAM policies: Policy1 and Policy2. Both policies are attached to an IAM group.
Policy1
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:Get*",
        "iam:List*",
        "kms:List*",
        "ec2:*",
        "ds:*",
        "logs:Get*",
        "logs:Describe*"
      ],
      "Resource": "*"
    }
  ]
}

Policy2
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ds:Delete*",
      "Resource": "*"
    }
  ]
}

A cloud engineer is added as an IAM user to the IAM group. Which action will the cloud engineer be able to perform?

A. Deleting IAM users

B. Deleting directories

C. Deleting Amazon EC2 instances

D. Deleting logs from Amazon CloudWatch Logs

A

A. Deleting IAM users: Policy1 allows “iam:Get*” and “iam:List*” actions, which cover listing and fetching IAM users and their information. However, it doesn’t include “iam:Delete*” actions, so deleting IAM users is not permitted by Policy1. Therefore, this option is less suitable.

B. Deleting directories: Policy2 explicitly denies “ds:Delete*” actions on AWS Directory Service resources. Deleting directories falls under this category, and the denial is enforced by Policy2. This option is less suitable because Policy2 prevents such actions.

D. Deleting logs from Amazon CloudWatch Logs: Policy1 allows “logs:Get*” and “logs:Describe*” actions, which cover fetching and describing logs in Amazon CloudWatch Logs. However, it doesn’t include “logs:Delete*” actions, so deleting logs is not permitted by Policy1. Therefore, this option is less suitable.
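
To check the combined effect, a small boto3 sketch using the IAM policy simulator (the two action names chosen are illustrative); an explicit Deny always overrides an Allow, so the directory deletion is denied while the EC2 termination is allowed by “ec2:*”:

import json

import boto3

policy1 = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["iam:Get*", "iam:List*", "kms:List*", "ec2:*",
                   "ds:*", "logs:Get*", "logs:Describe*"],
        "Resource": "*",
    }],
}
policy2 = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "ds:Delete*", "Resource": "*"}],
}

iam = boto3.client("iam")

# Simulate both attached policies together, as IAM would evaluate them.
response = iam.simulate_custom_policy(
    PolicyInputList=[json.dumps(policy1), json.dumps(policy2)],
    ActionNames=["ec2:TerminateInstances", "ds:DeleteDirectory"],
)
for result in response["EvaluationResults"]:
    # Expected: ec2:TerminateInstances -> allowed,
    #           ds:DeleteDirectory   -> explicitDeny
    print(result["EvalActionName"], result["EvalDecision"])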