Q051-100 Flashcards
A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon Elastic Block Store (Amazon EBS) volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?
A. Copy the data so both EBS volumes contain all the documents.
B. Configure the Application Load Balancer to direct a user to the server with the documents.
C. Copy the data from both EBS volumes to Amazon Elastic File System (Amazon EFS). Modify the application to save new documents to Amazon Elastic File System (Amazon EFS).
D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.
A. Copying the data so that both EBS volumes contain all the documents would lead to data duplication and synchronization challenges. It doesn’t address the issue of load balancing and ensuring users can see all their documents seamlessly.
B. Configuring the Application Load Balancer to direct a user to a specific server doesn’t solve the problem, as it would still result in users seeing only a subset of their documents depending on which server they are directed to.
Option D, while feasible, is more complex to implement and maintain. You would need to ensure that both servers are kept in sync with the latest document changes, which can introduce synchronization challenges.
A company is planning to use Amazon S3 to store images uploaded by its users. The images must be encrypted at rest in Amazon S3. The company does not want to spend time managing and rotating the keys, but it does want to control who can access those keys.
What should a solutions architect use to accomplish this?
A. Server-Side Encryption with keys stored in an S3 bucket
B. Server-Side Encryption with Customer-Provided Keys (SSE-C)
C. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
A. Server-Side Encryption with keys stored in an S3 bucket would require you to manage and rotate the keys yourself, which is not what you want to do. Additionally, it may not provide the same level of control over access to the keys as AWS KMS provides.
B. Server-Side Encryption with Customer-Provided Keys (SSE-C) allows you to provide your own encryption keys, but you are responsible for managing and rotating these keys, which is something you want to avoid.
C. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) allows Amazon S3 to manage the encryption keys, but it doesn’t provide you with control over access to the keys, which is a requirement in this case.
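As a minimal sketch of what SSE-KMS looks like in practice, these are the kinds of request parameters an upload call would carry; the bucket name, object key, and KMS key ARN below are hypothetical placeholders.

```python
# Sketch of upload request parameters for SSE-KMS. All names/ARNs are
# hypothetical examples, not real resources.
put_object_params = {
    "Bucket": "example-user-images",               # hypothetical bucket
    "Key": "uploads/photo-123.jpg",
    "Body": b"image bytes go here",
    "ServerSideEncryption": "aws:kms",             # encrypt with an AWS KMS key
    "SSEKMSKeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
}
# Who may use the key is controlled separately through the KMS key policy
# and IAM, and KMS can rotate the key material automatically when enabled.
```

This is why SSE-KMS fits the requirement: key rotation is handled by the service, while the key policy still lets the company control exactly who can use the key.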
A company is running an ecommerce application on Amazon EC2. The application consists of a stateless web tier that requires a minimum of 10 instances, and a peak of 250 instances to support the application’s usage. The application requires 50 instances 80% of the time.
Which solution should be used to minimize costs?
A. Purchase Reserved Instances to cover 250 instances.
B. Purchase Reserved Instances to cover 80 instances. Use Spot Instances to cover the remaining instances.
C. Purchase On-Demand Instances to cover 40 instances. Use Spot Instances to cover the remaining instances.
D. Purchase Reserved Instances to cover 50 instances. Use On-Demand and Spot Instances to cover the remaining instances.
A. Purchasing Reserved Instances to cover the maximum peak of 250 instances would be costly and may lead to underutilized Reserved Instances during non-peak periods, resulting in unnecessary expenses.
B. Purchasing Reserved Instances to cover 80 instances would leave you underprovisioned during peak times when you require 250 instances, and using Spot Instances to cover the remaining instances during peak times could be less cost-effective.
C. Purchasing On-Demand Instances to cover 40 instances and using Spot Instances for the remaining instances might not provide enough capacity during peak times when you need 250 instances.
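The cost logic above can be made concrete with a back-of-the-envelope calculation. The hourly rates and the assumed average burst size below are illustrative numbers, not real AWS pricing.

```python
# Illustrative prices only, not real AWS rates.
od_rate, ri_rate = 0.10, 0.06   # assumed On-Demand and Reserved $/hour
hours = 730                     # approximate hours in a month

# Option D: reserve the 50 instances needed 80% of the time; assume the
# bursty remainder averages out to about 20 On-Demand instances per month.
option_d = 50 * ri_rate * hours + 20 * od_rate * hours

# Option A: reserve the full 250-instance peak around the clock.
option_a = 250 * ri_rate * hours

assert option_d < option_a  # reserving only the steady baseline is cheaper
```

Reserving the steady 50-instance baseline and covering spikes with On-Demand and Spot avoids paying reserved rates for 200 instances that sit idle most of the year.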
A company has deployed an API in a VPC behind an internet-facing Application Load Balancer (ALB). An application that consumes the API as a client is deployed in a second account in private subnets behind a NAT gateway. When requests to the client application increase, the NAT gateway costs are higher than expected. A solutions architect has configured the ALB to be internal.
Which combination of architectural changes will reduce the NAT gateway costs? (Choose two.)
A. Configure a VPC peering connection between the two VPCs. Access the API using the private address.
B. Configure an AWS Direct Connect connection between the two VPCs. Access the API using the private address.
C. Configure a ClassicLink connection for the API into the client VPC. Access the API using the ClassicLink address.
D. Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address.
E. Configure an AWS Resource Access Manager connection between the two accounts. Access the API using the private address.
Option B (Configure an AWS Direct Connect connection between the two VPCs) misapplies the service: Direct Connect links on-premises networks to AWS and is not a mechanism for connecting two VPCs. It would also be more complex and expensive than VPC peering or PrivateLink in this context.
Option C (Configure a ClassicLink connection for the API into the client VPC) is used for linking EC2-Classic instances to a Virtual Private Cloud (VPC) and is not relevant to this scenario.
Option E (Configure an AWS Resource Access Manager connection between the two accounts) doesn’t directly address the NAT gateway cost issue and is not a typical solution for reducing NAT gateway costs.
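On the client side, consuming the API over PrivateLink comes down to creating an interface VPC endpoint for the provider's endpoint service. The sketch below shows the request parameters for that call; all identifiers are hypothetical, and it assumes the provider has already exposed the API as an endpoint service.

```python
# Sketch of the client-side interface endpoint request. All IDs and the
# service name are hypothetical placeholders.
create_endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0123456789abcdef0",              # client application's VPC
    "ServiceName": "com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    "SubnetIds": ["subnet-0123456789abcdef0"],     # private subnets of the client app
    "SecurityGroupIds": ["sg-0123456789abcdef0"],  # allow 443 from the client app
    "PrivateDnsEnabled": True,
}
# Requests to the endpoint's private address stay on the AWS network,
# bypassing the NAT gateway and its per-GB data processing charges.
```

The same cost effect applies to VPC peering (option A): either path keeps API traffic private so it never traverses the NAT gateway.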
A solutions architect is tasked with transferring 750 TB of data from an on-premises network-attached file system located at a branch office to Amazon S3 Glacier.
The migration must not saturate the on-premises 1 Mbps internet connection.
Which solution will meet these requirements?
A. Create an AWS Site-to-Site VPN tunnel to an Amazon S3 bucket and transfer the files directly by using the AWS CLI.
B. Order 10 AWS Snowball Edge Storage Optimized devices, and select an S3 Glacier vault as the destination.
C. Mount the network-attached file system to an S3 bucket, and copy the files directly. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
D. Order 10 AWS Snowball Edge Storage Optimized devices, and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
A. Creating an AWS site-to-site VPN tunnel and transferring the files directly to Amazon S3 may not be feasible given the limited 1 Mbps internet connection. It could saturate the connection and take a significant amount of time to complete the transfer.
B. AWS Snowball Edge Storage Optimized devices are designed for offline transfer of large data volumes, so they avoid saturating the internet connection. However, a Snowball import job cannot deliver data directly into an S3 Glacier vault; the destination must be an Amazon S3 bucket. The data has to land in S3 first and then transition to S3 Glacier through a lifecycle policy, which is what option D describes.
C. While mounting the network-attached file system to an S3 bucket and copying the files directly to S3 is a possible approach, it might not be suitable for this scenario if it leads to saturating the 1 Mbps internet connection for an extended period.
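The lifecycle part of option D can be sketched as an S3 lifecycle configuration that transitions the imported objects to S3 Glacier; the rule ID and prefix below are hypothetical.

```python
# Sketch of a lifecycle configuration moving Snowball-imported objects to
# S3 Glacier. Rule ID and prefix are hypothetical placeholders.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-snowball-imports",
            "Status": "Enabled",
            "Filter": {"Prefix": "branch-office/"},      # hypothetical key prefix
            "Transitions": [
                {"Days": 0, "StorageClass": "GLACIER"}   # archive as soon as eligible
            ],
        }
    ]
}
```

With this rule in place, objects arrive in S3 via the Snowball import and are archived to Glacier automatically, with no traffic over the 1 Mbps link.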
A company has a two-tier application architecture that runs in public and private subnets. Amazon EC2 instances running the web application are in the public subnet and a database runs on the private subnet. The web application instances and the database are running in a single Availability Zone (AZ).
Which combination of steps should a solutions architect take to provide high availability for this architecture? (Choose two.)
A. Create new public and private subnets in the same AZ for high availability.
B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs.
C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer.
D. Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ.
E. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment.
A. Creating new public and private subnets in the same AZ for high availability does not provide fault tolerance because both subnets are still within the same AZ. High availability typically involves distributing resources across different AZs.
C. Adding the existing web application instances to an Auto Scaling group behind an Application Load Balancer without spanning multiple AZs does not fully address high availability. You need to ensure that instances are spread across multiple AZs to achieve better fault tolerance.
D. Creating new public and private subnets in a new AZ and creating a database using Amazon EC2 in one AZ does not provide high availability for the database. It’s essential to utilize Amazon RDS with multi-AZ deployment or another highly available database configuration to ensure database availability across different AZs.
A solutions architect is implementing a document review application using an Amazon S3 bucket for storage. The solution must prevent an accidental deletion of the documents and ensure that all versions of the documents are available. Users must be able to download, modify, and upload documents.
Which combination of actions should be taken to meet these requirements? (Choose two.)
A. Enable a read-only bucket ACL.
B. Enable versioning on the bucket.
C. Attach an IAM policy to the bucket.
D. Enable MFA Delete on the bucket.
E. Encrypt the bucket using AWS KMS.
A. Enabling a read-only bucket ACL restricts all write actions to the bucket, which would not allow users to modify or upload documents. This is not suitable for the stated requirements.
D. Enabling MFA Delete is a security measure that requires multi-factor authentication (MFA) for object deletions. It adds an extra layer of protection for deletions but may not be necessary to meet the requirements.
E. Encrypting the bucket using AWS KMS is a security measure to protect data at rest. While it’s a good practice for security, it is not directly related to preventing accidental deletions or ensuring document versions are available.
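The versioning half of the answer is a single bucket-level setting. The sketch below shows the request parameters for enabling it; the bucket name is a hypothetical placeholder.

```python
# Sketch of the request that enables versioning on the document bucket.
# The bucket name is a hypothetical placeholder.
versioning_params = {
    "Bucket": "example-document-reviews",
    "VersioningConfiguration": {"Status": "Enabled"},
}
# With versioning enabled, a DELETE only adds a delete marker; every prior
# version of a document remains retrievable, and overwrites create new
# versions instead of replacing the old ones.
```

This is what satisfies both requirements at once: accidental deletions are recoverable, and all versions of each document stay available.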
An application hosted on AWS is experiencing performance problems, and the application vendor wants to perform an analysis of the log file to troubleshoot further. The log file is stored on Amazon S3 and is 10 GB in size. The application owner will make the log file available to the vendor for a limited time.
What is the MOST secure way to do this?
A. Enable public read on the S3 object and provide the link to the vendor.
B. Upload the file to Amazon WorkDocs and share the public link with the vendor.
C. Generate a presigned URL and have the vendor download the log file before it expires.
D. Create an IAM user for the vendor to provide access to the S3 bucket and the application. Enforce multi-factor authentication.
A. Enabling public read on the S3 object and providing the link to the vendor is not secure as it makes the log file accessible to anyone with the link, which can lead to unauthorized access.
B. Uploading the file to Amazon WorkDocs and sharing the public link with the vendor might be a convenient option, but it may not provide the same level of control and security as a presigned URL. Also, WorkDocs may not be the most suitable service for this specific use case.
D. Creating an IAM user for the vendor with access to the S3 bucket is generally secure, but enforcing multi-factor authentication (MFA) could be complex and might not be practical for the vendor. Additionally, it’s often preferable to avoid creating IAM users for external parties when temporary access is needed because managing such accounts can be challenging.
A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company.
How should security groups be configured in this situation? (Choose two.)
A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.
B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0.
C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.
D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier.
E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier.
B. Security groups are stateful, so responses to allowed inbound requests are permitted automatically. An explicit outbound rule on port 443 does nothing for serving web traffic; it would only matter if the web tier itself needed to initiate outbound HTTPS connections, which is not part of the stated requirements.
D. Configuring the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier is not a typical configuration and may not be needed for the described architecture. In most cases, outbound traffic from the database tier to the web tier is not necessary for basic web application functionality and should be restricted based on specific use cases.
E. Configuring the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier may introduce unnecessary and potentially insecure access to the database tier over port 443. It’s recommended to allow only necessary and specific ports for communication.
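The two correct rules (options A and C) can be sketched as security group ingress parameters. The group IDs below are hypothetical placeholders; the key point is that the database rule references the web tier's security group rather than a CIDR range.

```python
# Sketch of the two ingress rules. Group IDs are hypothetical placeholders.
web_tier_ingress = {
    "GroupId": "sg-0web0123456789abc",
    "IpPermissions": [{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],   # public HTTPS from anywhere
    }],
}
db_tier_ingress = {
    "GroupId": "sg-0db00123456789abc",
    "IpPermissions": [{
        "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,  # SQL Server port
        # Reference the web tier's security group instead of an IP range,
        # so only web-tier instances can reach the database:
        "UserIdGroupPairs": [{"GroupId": "sg-0web0123456789abc"}],
    }],
}
```

Referencing the web tier's security group keeps the database rule correct even as Auto Scaling adds and removes web instances with new IP addresses.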
A company allows its developers to attach existing IAM policies to existing IAM roles to enable faster experimentation and agility. However, the security operations team is concerned that the developers could attach the existing administrator policy, which would allow the developers to circumvent any other security policies.
How should a solutions architect address this issue?
A. Create an Amazon SNS topic to send an alert every time a developer creates a new policy.
B. Use service control policies to disable IAM activity across all accounts in the organizational unit.
C. Prevent the developers from attaching any policies and assign all IAM duties to the security operations team.
D. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy.
A. Creating an Amazon SNS topic to send an alert every time a developer creates a new policy can be helpful for monitoring and auditing, but it doesn’t prevent the action from happening in the first place.
B. Using service control policies (SCPs) to disable IAM activity across all accounts in the organizational unit would be an overly restrictive measure and would likely hinder legitimate IAM activities.
Option C, preventing developers from attaching any policies and assigning all IAM duties to the security operations team, is not practical for agility and experimentation, and it would create a bottleneck for IAM tasks.
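A boundary-style policy for option D can be sketched as a standard IAM policy document with an explicit deny on attaching the administrator policy. The statement ID is a hypothetical name; the structure follows the normal IAM policy grammar.

```python
# Sketch of a policy document that explicitly denies attaching the
# AdministratorAccess managed policy. The Sid is a hypothetical name.
boundary_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAttachingAdminPolicy",
            "Effect": "Deny",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:AttachUserPolicy",
                "iam:AttachGroupPolicy",
            ],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "iam:PolicyARN": "arn:aws:iam::aws:policy/AdministratorAccess"
                }
            },
        }
    ],
}
```

Because an explicit deny always overrides any allow, developers keep their agility for other policies while this one attachment is blocked outright.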
A company has a multi-tier application that runs six front-end web servers in an Amazon EC2 Auto Scaling group in a single Availability Zone behind an Application Load Balancer (ALB). A solutions architect needs to modify the infrastructure to be highly available without modifying the application.
Which architecture should the solutions architect choose that provides high availability?
A. Create an Auto Scaling group that uses three instances across each of two Regions.
B. Modify the Auto Scaling group to use three instances across each of two Availability Zones.
C. Create an Auto Scaling template that can be used to quickly create more instances in another Region.
D. Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance traffic to the web tier.
Option A (creating an Auto Scaling group across two Regions) would provide geographic redundancy but may introduce latency and complexity, and it could involve modifying the application to handle cross-region traffic.
Option C (creating an Auto Scaling template for another Region) is not necessary for addressing high availability within a single Region, and it doesn’t provide the same level of redundancy as using multiple Availability Zones.
Option D (changing the ALB to use round-robin) would distribute traffic evenly but wouldn’t address the high availability requirement effectively. Using multiple Availability Zones is a more robust solution for ensuring availability.
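The change in option B is a small update to the existing Auto Scaling group. The sketch below shows plausible update parameters; the group name, zones, and subnet IDs are hypothetical placeholders.

```python
# Sketch of spreading the existing group across two Availability Zones.
# Group name, AZs, and subnet IDs are hypothetical placeholders.
update_params = {
    "AutoScalingGroupName": "web-tier-asg",
    "MinSize": 6,
    "AvailabilityZones": ["us-east-1a", "us-east-1b"],
    # For a VPC-based group, one subnet per AZ is listed instead:
    # "VPCZoneIdentifier": "subnet-0123456789abcdef0,subnet-0123456789abcdef1",
}
# With six instances across two zones, Auto Scaling balances three into
# each; losing one AZ still leaves three instances serving traffic.
```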
A company runs an application on a group of Amazon Linux EC2 instances. For compliance reasons, the company must retain all application log files for 7 years.
The log files will be analyzed by a reporting tool that must access all files concurrently.
Which storage solution meets these requirements MOST cost-effectively?
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon Elastic File System (Amazon EFS)
C. Amazon EC2 instance store
D. Amazon S3
Option B (Amazon EFS) might be considered, but it is typically more expensive for long-term storage compared to Amazon S3, and it may not provide the same durability and cost benefits.
Options A (Amazon EBS) and C (Amazon EC2 instance store) are not suitable for long-term storage of log files because they do not offer the durability, scalability, and cost-effectiveness of Amazon S3.
A media streaming company collects real-time data and stores it in a disk-optimized database system. The company is not getting the expected throughput and wants an in-memory database storage solution that performs faster and provides high availability using data replication.
Which database should a solutions architect recommend?
A. Amazon RDS for MySQL
B. Amazon RDS for PostgreSQL
C. Amazon ElastiCache for Redis
D. Amazon ElastiCache for Memcached
Options A (Amazon RDS for MySQL) and B (Amazon RDS for PostgreSQL) are relational database services and may not offer the same in-memory performance benefits as Redis for certain workloads. They are disk-optimized databases and not specifically designed for high-speed in-memory operations.
Option D (Amazon ElastiCache for Memcached) is another in-memory caching service, but it lacks some advanced features like data persistence and complex data structures offered by Redis.
A company hosts its product information webpages on AWS. The existing solution uses multiple Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. The website also uses a custom DNS name and communicates with HTTPS only using a dedicated SSL certificate. The company is planning a new product launch and wants to be sure that users from around the world have the best possible experience on the new website.
What should a solutions architect do to meet these requirements?
A. Redesign the application to use Amazon CloudFront.
B. Redesign the application to use AWS Elastic Beanstalk.
C. Redesign the application to use a Network Load Balancer.
D. Redesign the application to use Amazon S3 static website hosting.
Option B (AWS Elastic Beanstalk) is a Platform as a Service (PaaS) offering that simplifies the deployment and management of web applications. While it can be used for web hosting, it doesn’t provide the global content delivery capabilities of CloudFront.
Option C (Network Load Balancer) is used for distributing incoming network traffic across targets within a VPC. It does not provide the content caching and global distribution capabilities needed to improve user experience worldwide.
Option D (Amazon S3 static website hosting) is suitable for hosting static content like HTML, CSS, and images but may not be the best choice for dynamic web applications that require backend processing.
A solutions architect is designing the cloud architecture for a new application being deployed on AWS. The process should run in parallel while adding and removing application nodes as needed based on the number of jobs to be processed. The processor application is stateless. The solutions architect must ensure that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?
A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage.
B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage.
C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.
D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.
Option A (using SNS topics) is not ideal for job processing, as SNS is designed for pub/sub messaging and doesn’t provide the durability and job processing capabilities of SQS.
Option B (scaling based on network usage) is not typically used for job processing scaling, and network usage may not directly correlate with the number of jobs to be processed.
Option D (using SNS topics) is similar to option A and does not provide the durability and job processing capabilities needed for this use case.
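The scaling rule in option C is essentially a backlog-per-instance calculation: divide the queue depth by how many messages one node can handle and clamp to the group's bounds. The function below is a sketch; the target of 10 messages per instance and the size limits are assumed values.

```python
# Sketch of scaling on SQS queue depth. The per-instance target and the
# min/max sizes are assumed, illustrative values.
def desired_capacity(queue_depth: int, messages_per_instance: int = 10,
                     min_size: int = 1, max_size: int = 20) -> int:
    """Return how many processor nodes the Auto Scaling group should run."""
    needed = -(-queue_depth // messages_per_instance)  # ceiling division
    return max(min_size, min(max_size, needed))

# A deep backlog scales out; an empty queue scales in to the minimum.
assert desired_capacity(95) == 10
assert desired_capacity(0) == 1
```

Scaling on the queue itself (rather than CPU or network) tracks the actual amount of pending work, while SQS durably stores the job items that drive it.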
A marketing company is storing CSV files in an Amazon S3 bucket for statistical analysis. An application on an Amazon EC2 instance needs permission to efficiently process the CSV data stored in the S3 bucket.
Which action will MOST securely grant the EC2 instance access to the S3 bucket?
A. Attach a resource-based policy to the S3 bucket.
B. Create an IAM user for the application with specific permissions to the S3 bucket.
C. Associate an IAM role with least privilege permissions to the EC2 instance profile.
D. Store AWS credentials directly on the EC2 instance for applications on the instance to use for API calls.
A. A resource-based bucket policy defines who may access the bucket, but it does not give the application on the EC2 instance any credentials with which to call S3. On its own it cannot grant the instance access, and it does not provide the automatic, temporary credentials that an instance role does.
B. Creating an IAM user for the application would require the storage of AWS access keys on the EC2 instance, which can pose a security risk if those credentials are compromised. IAM roles with temporary security credentials are a more secure option.
D. Storing AWS credentials directly on the EC2 instance can be risky, as it increases the likelihood of unauthorized access to those credentials if the instance is compromised. IAM roles provide a more secure and manageable way to grant access to AWS resources for EC2 instances.
A company has on-premises servers that run a relational database. The database serves high-read traffic for users in different locations. The company wants to migrate the database to AWS with the least amount of effort. The database solution must support high availability and must not affect the company’s current traffic flow.
Which solution meets these requirements?
A. Use a database in Amazon RDS with Multi-AZ and at least one read replica.
B. Use a database in Amazon RDS with Multi-AZ and at least one standby replica.
C. Use databases that are hosted on multiple Amazon EC2 instances in different AWS Regions.
D. Use databases that are hosted on Amazon EC2 instances behind an Application Load Balancer in different Availability Zones.
Option B (using Amazon RDS with Multi-AZ and at least one standby replica) is redundant and misses the point: Multi-AZ already maintains a synchronous standby instance, and that standby cannot serve read traffic. It therefore does nothing to offload the high-read workload, which is exactly what the read replica in option A provides.
Option C (hosting databases on multiple Amazon EC2 instances in different AWS Regions) can be complex and may introduce additional latency for users in different locations. It’s also not clear how it would seamlessly handle the migration without affecting traffic flow.
Option D (hosting databases on Amazon EC2 instances behind an Application Load Balancer in different Availability Zones) requires manual management of database instances and high availability solutions, which may not be as easy to set up and maintain as Amazon RDS with Multi-AZ.
A company’s application is running on Amazon EC2 instances within an Auto Scaling group behind an Elastic Load Balancer. Based on the application’s history, the company anticipates a spike in traffic during a holiday each year. A solutions architect must design a strategy to ensure that the Auto Scaling group proactively increases capacity to minimize any performance impact on application users.
Which solution will meet these requirements?
A. Create an Amazon CloudWatch alarm to scale up the EC2 instances when CPU utilization exceeds 90%.
B. Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand.
C. Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during the peak demand period.
D. Configure an Amazon Simple Notification Service (Amazon SNS) notification to send alerts when there are autoscaling:EC2_INSTANCE_LAUNCH events.
Option A (creating a CloudWatch alarm based on CPU utilization) is reactive and may lead to delayed scaling since it relies on threshold breaches. It may not provide the required proactive capacity increase.
Option C (increasing the minimum and maximum number of EC2 instances) could work but would require manual intervention to adjust the group’s capacity, which may not be as efficient as using a scheduled action.
Option D (configuring an Amazon SNS notification for EC2_INSTANCE_LAUNCH events) is related to monitoring and notification and does not provide the proactive capacity scaling required to handle the anticipated traffic spike.
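A recurring scheduled action for option B can be sketched as the following parameters; the group name, cron expression, and capacity numbers are hypothetical values for illustration.

```python
# Sketch of a recurring scheduled scaling action ahead of the holiday peak.
# Group name, schedule, and sizes are hypothetical placeholders.
scheduled_action = {
    "AutoScalingGroupName": "ecommerce-web-asg",
    "ScheduledActionName": "holiday-pre-scale",
    "Recurrence": "0 8 20 12 *",   # cron: 08:00 UTC every December 20
    "MinSize": 20,
    "MaxSize": 100,
    "DesiredCapacity": 40,
}
# Capacity is raised before the traffic arrives, instead of reacting to a
# CloudWatch alarm after users are already experiencing slowdowns.
```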
A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate messages.
What should a solutions architect do to ensure messages are being processed once only?
A. Use the CreateQueue API call to create a new queue.
B. Use the AddPermission API call to add appropriate permissions.
C. Use the ReceiveMessage API call to set an appropriate wait time.
D. Use the ChangeMessageVisibility API call to increase the visibility timeout.
A. Using the CreateQueue API call to create a new queue would not address the issue of message duplication. It creates a new queue but doesn’t prevent duplicates.
B. Using the AddPermission API call is related to granting or managing permissions for different AWS accounts to access the queue and does not address message duplication.
C. Using the ReceiveMessage API call to set an appropriate wait time determines how long the ReceiveMessage call should wait for messages to become available in the queue but does not directly address message duplication.
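The duplication mechanism behind answer D is a timing problem: if processing takes longer than the visibility timeout, SQS re-delivers the message to another consumer before the first one deletes it. The numbers below are illustrative.

```python
# Why a short visibility timeout causes duplicate RDS rows. Times are
# illustrative values, not measurements.
visibility_timeout = 30   # seconds the message stays hidden after receipt
processing_time = 45      # seconds a worker needs to write to RDS and delete

# The message becomes visible again while the first worker is still busy,
# so a second worker receives and processes the same message.
redelivered = processing_time > visibility_timeout
assert redelivered

# Raising the timeout above the worst-case processing time prevents the
# re-delivery window from opening at all:
visibility_timeout = 120
assert processing_time <= visibility_timeout
```

ChangeMessageVisibility (or a higher default visibility timeout on the queue) keeps each message hidden until its consumer has finished and deleted it.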
An Amazon EC2 administrator created the following policy associated with an IAM group containing several users:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "10.100.100.0/24"
                }
            }
        },
        {
            "Effect": "Deny",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "ec2:Region": "us-east-1"
                }
            }
        }
    ]
}
What is the effect of this policy?
A. Users can terminate an EC2 instance in any AWS Region except us-east-1.
B. Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region.
C. Users can terminate an EC2 instance in the us-east-1 Region when the user’s source IP is 10.100.100.254.
D. Users cannot terminate an EC2 instance in the us-east-1 Region when the user’s source IP is 10.100.100.254.
The first statement allows ec2:TerminateInstances only when the caller's source IP falls within 10.100.100.0/24.
The second statement explicitly denies all EC2 actions in any Region other than us-east-1.
A user calling from 10.100.100.254 matches the Allow condition, and in us-east-1 the Deny's StringNotEquals condition is not met, so the Deny does not apply. Termination therefore succeeds in us-east-1 for that source IP, which is option C.
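The evaluation can be sanity-checked with a toy simulation of just these two conditions; this is an illustration using only the policy's two checks, not IAM's real evaluation engine.

```python
# Toy re-evaluation of the two statements. Illustration only; real IAM
# policy evaluation involves more than these two checks.
import ipaddress

def can_terminate(source_ip: str, region: str) -> bool:
    allowed = ipaddress.ip_address(source_ip) in ipaddress.ip_network("10.100.100.0/24")
    denied = region != "us-east-1"   # explicit deny on all ec2:* outside us-east-1
    return allowed and not denied    # an explicit deny always wins

assert can_terminate("10.100.100.254", "us-east-1")       # option C's scenario
assert not can_terminate("10.100.100.254", "eu-west-1")   # denied by Region
assert not can_terminate("192.0.2.10", "us-east-1")       # outside the CIDR
```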