Q101-150 Flashcards
A company has an Amazon EC2 instance running in a private subnet that needs to access a public website to download patches and updates. The company does not want external websites to see the EC2 instance's IP address or initiate connections to it.
How can a solutions architect achieve this objective?
A. Create a site-to-site VPN connection between the private subnet and the network in which the public site is deployed.
B. Create a NAT gateway in a public subnet. Route outbound traffic from the private subnet through the NAT gateway.
C. Create a network ACL for the private subnet where the EC2 instance is deployed that allows access only from the IP address range of the public website.
D. Create a security group that only allows connections from the IP address range of the public website. Attach the security group to the EC2 instance.
A. Creating a site-to-site VPN connection would establish a secure connection between the private subnet and the network where the public website is deployed. While this would provide secure communication, it doesn’t hide the EC2 instance’s IP address or prevent incoming connections from external websites.
C. Creating a network ACL (Access Control List) that allows access only from the IP address range of the public website is a partial solution. While it restricts incoming traffic to your EC2 instance, it does not hide the EC2 instance’s IP address or control outbound connections.
D. Creating a security group that only allows connections from the IP address range of the public website would also restrict incoming traffic, but it does not address the requirement to hide the EC2 instance’s IP address or control outbound connections.
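For reference, a minimal boto3 sketch of option B, assuming a public subnet, an Elastic IP allocation, and the private subnet's route table already exist (the IDs below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Create the NAT gateway in a public subnet, using an existing Elastic IP.
# subnet-pub123 and eipalloc-abc123 are placeholder IDs.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-pub123",
    AllocationId="eipalloc-abc123",
)
nat_gw_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before adding the route.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])

# Send all outbound internet traffic from the private subnet's route table
# through the NAT gateway; external hosts see only the NAT gateway's Elastic IP
# and cannot initiate connections to the instance.
ec2.create_route(
    RouteTableId="rtb-priv456",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)
```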
A company must migrate 20 TB of data from a data center to the AWS Cloud within 30 days. The company’s network bandwidth is limited to 15 Mbps and cannot exceed 70% utilization. What should a solutions architect do to meet these requirements?
A. Use AWS Snowball.
B. Use AWS DataSync.
C. Use a secure VPN connection.
D. Use Amazon S3 Transfer Acceleration.
B. Use AWS DataSync:
AWS DataSync is a service designed for transferring data between on-premises storage and Amazon S3 or Amazon EFS. While it can be used for data transfer, it may not be the best choice for transferring 20 TB of data within a 30-day timeframe, given the limited network bandwidth of 15 Mbps. AWS DataSync relies on the available network bandwidth, and with the given constraints, it would likely take significantly longer than 30 days to complete the transfer.
C. Use a secure VPN connection:
A secure VPN connection is used to establish a secure, encrypted connection between your on-premises network and your AWS environment. While it can be useful for secure communication, it doesn’t address the limitations of the available network bandwidth (15 Mbps). The data transfer rate would still be limited by the available bandwidth, making it impractical to transfer 20 TB of data within 30 days in this scenario.
D. Use Amazon S3 Transfer Acceleration:
Amazon S3 Transfer Acceleration is a feature of Amazon S3 that is designed to speed up uploads to Amazon S3 by leveraging Amazon CloudFront’s globally distributed edge locations. While it can help improve upload speed, it doesn’t address the fundamental limitation of the available network bandwidth (15 Mbps). It can be useful for improving performance when transferring data over the internet, but it may not be sufficient to meet the 20 TB data transfer requirement within the specified time frame, especially if the network bandwidth is a bottleneck.
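A quick back-of-the-envelope calculation (a rough sketch that ignores protocol overhead and assumes the link runs at 70% utilization around the clock) shows why any network-based transfer is infeasible here and why AWS Snowball (option A) is the practical choice:

```python
# 20 TB over a 15 Mbps link capped at 70% utilization
data_bits = 20 * 10**12 * 8          # 20 TB expressed in bits (decimal TB)
usable_bps = 15 * 10**6 * 0.70       # 10.5 Mbps of usable bandwidth

seconds = data_bits / usable_bps
days = seconds / 86_400
print(f"{days:.0f} days")            # roughly 176 days, far beyond the 30-day window
```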
A company has a website running on Amazon EC2 instances across two Availability Zones. The company is expecting spikes in traffic on specific holidays and wants to provide a consistent user experience. How can a solutions architect meet this requirement?
A. Use step scaling.
B. Use simple scaling.
C. Use lifecycle hooks.
D. Use scheduled scaling.
A. Step scaling is a dynamic scaling method that adjusts the capacity of your Auto Scaling group based on a set of policies and metrics. It is typically used for responding to changes in traffic patterns that are not predictable, such as sudden increases or decreases in load. It may not be the best choice for handling planned, predictable spikes in traffic on specific holidays.
B. Simple scaling, similar to step scaling, is used for dynamic scaling based on specified policies and metrics. It is not specifically designed for handling scheduled, predictable traffic spikes.
C. Lifecycle hooks are used to perform custom actions when instances are launched or terminated within an Auto Scaling group. While they can be helpful for customizing instance launches and terminations, they are not directly related to addressing the requirement of handling traffic spikes on specific holidays.
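Scheduled scaling (option D) lets the Auto Scaling group grow ahead of a known event. A minimal boto3 sketch, assuming a hypothetical Auto Scaling group named web-asg and illustrative holiday dates and sizes:

```python
import boto3
from datetime import datetime, timezone

autoscaling = boto3.client("autoscaling")

# Scale out ahead of the expected holiday traffic spike...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="holiday-scale-out",
    StartTime=datetime(2024, 12, 24, 6, 0, tzinfo=timezone.utc),
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# ...and scale back in once the holiday is over.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="holiday-scale-in",
    StartTime=datetime(2024, 12, 27, 6, 0, tzinfo=timezone.utc),
    MinSize=2,
    MaxSize=4,
    DesiredCapacity=2,
)
```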
An ecommerce company is running a multi-tier application on AWS. The front-end and backend tiers both run on Amazon EC2, and the database runs on Amazon RDS for MySQL. The backend tier communicates with the RDS instance. There are frequent calls to return identical datasets from the database that are causing performance slowdowns.
Which action should be taken to improve the performance of the backend?
A. Implement Amazon SNS to store the database calls.
B. Implement Amazon ElastiCache to cache the large datasets.
C. Implement an RDS for MySQL read replica to cache database calls.
D. Implement Amazon Kinesis Data Firehose to stream the calls to the database.
A. Amazon SNS (Simple Notification Service) is used for communication between distributed components and for sending notifications. It is not designed for caching data or improving the performance of database calls.
C. Implementing an RDS for MySQL read replica can help distribute read traffic and improve read scalability, but it doesn’t specifically address the issue of caching identical datasets to reduce database calls.
D. Amazon Kinesis Data Firehose is used for streaming data to other AWS services, such as data lakes and data analytics. It’s not designed for caching or improving the performance of database calls.
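The usual pattern with ElastiCache (option B) is cache-aside: check the cache first and fall back to the database only on a miss. A simplified sketch using the redis-py client against a hypothetical ElastiCache for Redis endpoint (the endpoint, key, and TTL are illustrative):

```python
import json
import redis

# Hypothetical ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.0001.use1.cache.amazonaws.com", port=6379)

def get_product_catalog(fetch_from_mysql):
    key = "catalog:all-products"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: skip the database entirely

    rows = fetch_from_mysql()                # cache miss: run the expensive query once
    cache.setex(key, 300, json.dumps(rows))  # keep the result for 5 minutes
    return rows
```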
A company has an on-premises data center that is running out of storage capacity. The company wants to migrate its storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow for immediate retrieval of data at no additional cost. How can these requirements be met?
A. Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload.
B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally.
C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
D. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
A. Deploying Amazon S3 Glacier Vault with expedited retrieval and provisioned retrieval capacity may provide fast access to archived data but is not suitable for immediate retrieval of all data, and it could incur additional costs.
C. Deploying AWS Storage Gateway with stored volumes is primarily designed for data backup and recovery, not for immediate retrieval of data without additional cost.
D. Deploying AWS Direct Connect would establish a dedicated network connection to AWS but doesn’t directly address the storage and data retrieval requirements.
A company is processing data on a daily basis. The results of the operations are stored in an Amazon S3 bucket, analyzed daily for one week, and then must remain immediately accessible for occasional analysis.
What is the MOST cost-effective storage solution alternative to the current configuration?
A. Configure a lifecycle policy to delete the objects after 30 days.
B. Configure a lifecycle policy to transition the objects to Amazon S3 Glacier after 30 days.
C. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
D. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
A. Configuring a lifecycle policy to delete the objects after 30 days would not meet the requirement of keeping the data immediately accessible for occasional analysis.
B. Configuring a lifecycle policy to transition the objects to Amazon S3 Glacier after 30 days would lower storage costs but would not keep the data immediately accessible without additional retrieval time and cost.
D. Configuring a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days might offer slightly lower storage costs than S3 Standard-IA, but the data is stored in a single Availability Zone and is at risk of loss if that Availability Zone fails. Because the data must remain reliably and immediately accessible for occasional analysis, S3 One Zone-IA is not the best choice.
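Option C can be expressed as a single lifecycle rule. A minimal boto3 sketch, assuming a hypothetical bucket name and applying the rule to every object in the bucket:

```python
import boto3

s3 = boto3.client("s3")

# Move objects to S3 Standard-IA 30 days after creation; they remain
# immediately retrievable, just at a lower storage price.
s3.put_bucket_lifecycle_configuration(
    Bucket="daily-processing-results",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-standard-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```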
A company delivers files in Amazon S3 to certain users who do not have AWS credentials. These users must be given access for a limited time. What should a solutions architect do to securely meet these requirements?
A. Enable public access on an Amazon S3 bucket.
B. Generate a presigned URL to share with the users.
C. Encrypt files using AWS KMS and provide keys to the users.
D. Create and assign IAM roles that will grant GetObject permissions to the users.
A. Enabling public access on an Amazon S3 bucket is not a secure solution, as it would make the files in the bucket accessible to anyone on the internet, potentially exposing sensitive data.
C. Encrypting files using AWS KMS and providing keys to users is a valid approach for securing data at rest, but it doesn’t address the requirement of granting temporary access for a limited time. It’s more suitable for securing data within the AWS environment.
D. Creating and assigning IAM roles with GetObject permissions is meant for granting access to AWS resources to users within the AWS environment who have AWS credentials. It’s not designed for providing temporary, external access to files in an S3 bucket for users without AWS credentials.
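Option B is a one-call operation in boto3. A minimal sketch, assuming a hypothetical bucket and object key; the link stops working once ExpiresIn elapses:

```python
import boto3

s3 = boto3.client("s3")

# Generate a time-limited link; anyone holding the URL can download the
# object until it expires, without needing AWS credentials of their own.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "customer-deliverables", "Key": "reports/2024-q1.pdf"},
    ExpiresIn=3600,  # valid for one hour
)
print(url)
```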
A company wants to run a hybrid workload for data processing. The data needs to be accessed by on-premises applications for local data processing using the NFS protocol, and must also be accessible from the AWS Cloud for further analytics and batch processing.
Which solution will meet these requirements?
A. Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud.
B. Use an AWS Storage Gateway tape gateway to copy the backup of the local data to AWS, then perform analytics on this data in the AWS cloud.
C. Use an AWS Storage Gateway volume gateway in a stored volume configuration to regularly take snapshots of the local data, then copy the data to AWS.
D. Use an AWS Storage Gateway volume gateway in a cached volume configuration to back up all the local storage in the AWS cloud, then perform analytics on this data in the cloud.
B. Using an AWS Storage Gateway tape gateway is primarily designed for long-term archiving and backup, not for providing immediate access to data for local data processing or analytics.
C. Using an AWS Storage Gateway volume gateway in a stored volume configuration is suitable for data backup and recovery but does not provide real-time access for local data processing and analytics.
D. Using an AWS Storage Gateway volume gateway in a cached volume configuration is designed for primary storage and backup of data in the AWS Cloud. While it can back up data from on-premises storage, it is not the best solution for real-time access and processing of data on-premises.
A company plans to store sensitive user data on Amazon S3. Internal security compliance requirements mandate encryption of data before sending it to Amazon S3.
What should a solutions architect recommend to satisfy these requirements?
A. Server-side encryption with customer-provided encryption keys
B. Client-side encryption with Amazon S3 managed encryption keys
C. Server-side encryption with keys stored in AWS Key Management Service (AWS KMS)
D. Client-side encryption with a master key stored in AWS Key Management Service (AWS KMS)
A. Server-side encryption with customer-provided encryption keys (SSE-C) lets you supply your own keys, but the encryption still happens on the server side, after the data has reached Amazon S3, so it does not satisfy the requirement to encrypt data before sending it.
B. Client-side encryption with Amazon S3 managed encryption keys is not a workable combination: S3-managed keys exist only on the server side and cannot be used by a client to encrypt data before it is uploaded, so this option does not meet the compliance requirement.
C. Server-side encryption with keys stored in AWS KMS (SSE-KMS) is a good option for server-side encryption, but it may not satisfy the requirement to encrypt data before sending it to S3. SSE-KMS is typically used for encrypting data at rest within S3, not for encrypting data before it’s uploaded.
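One way to implement option D is to have AWS KMS generate a data key and encrypt locally before uploading (the AWS Encryption SDK or the Amazon S3 Encryption Client are the usual production choices). A simplified sketch, assuming a hypothetical KMS key alias and bucket, and using the cryptography package's Fernet for the local encryption step:

```python
import base64
import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Ask KMS for a data key: Plaintext is used locally and then discarded,
# CiphertextBlob is stored with the object so the key can be recovered later.
data_key = kms.generate_data_key(KeyId="alias/user-data", KeySpec="AES_256")

fernet = Fernet(base64.urlsafe_b64encode(data_key["Plaintext"]))
ciphertext = fernet.encrypt(b"sensitive user record")

# Only ciphertext ever leaves the client; S3 never sees the plaintext.
s3.put_object(
    Bucket="sensitive-user-data",
    Key="users/123.bin",
    Body=ciphertext,
    Metadata={"x-enc-key": base64.b64encode(data_key["CiphertextBlob"]).decode()},
)
```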
A solutions architect is moving the static content from a public website hosted on Amazon EC2 instances to an Amazon S3 bucket. An Amazon CloudFront distribution will be used to deliver the static assets. The security group used by the EC2 instances restricts access to a limited set of IP ranges. Access to the static content should be similarly restricted.
Which combination of steps will meet these requirements? (Choose two.)
A. Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects.
B. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution.
C. Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the CloudFront distribution.
D. Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the S3 bucket hosting the static content.
E. Create a new IAM role and associate the role with the distribution. Change the permissions either on the S3 bucket or on the files within the S3 bucket so that only the newly created IAM role has read and download permissions.
C. Creating a new security group and associating it with CloudFront is not a valid approach because security groups are used for EC2 instances, not CloudFront distributions.
D. Creating a new security group and associating it with the S3 bucket is also not a valid approach because security groups are not used to restrict access to S3 buckets directly. They are used for EC2 instances.
E. Creating a new IAM role and changing permissions on the S3 bucket or files within the bucket is not necessary for achieving IP-based restrictions. IAM roles are used for granting permissions to AWS services, not for IP filtering.
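For the OAI half of the answer (option A), the bucket policy only needs to trust the origin access identity. A sketch of such a policy applied with boto3, assuming hypothetical bucket and OAI IDs:

```python
import json
import boto3

s3 = boto3.client("s3")

# Allow only the CloudFront origin access identity to read objects; requests
# that bypass CloudFront and hit the bucket directly are denied by default.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLEOAI"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::static-assets-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="static-assets-bucket", Policy=json.dumps(policy))
```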
A company is investigating potential solutions that would collect, process, and store users' service usage data. The business objective is to create an analytics capability that will enable the company to gather operational insights quickly using standard SQL queries. The solution should be highly available and ensure Atomicity, Consistency, Isolation, and Durability (ACID) compliance in the data tier.
Which solution should a solutions architect recommend?
A. Use an Amazon Timestream database.
B. Use an Amazon Neptune database in a Multi-AZ design.
C. Use a fully managed Amazon RDS for MySQL database in a Multi-AZ design.
D. Deploy PostgreSQL on an Amazon EC2 instance that uses Amazon Elastic Block Store (Amazon EBS) Throughput Optimized HDD (st1) storage.
A. Amazon Timestream is a purpose-built, serverless time-series database for managing and querying time-series data. While it’s excellent for time-series data, it may not be the best fit for standard SQL queries and traditional relational database requirements such as ACID compliance.
B. Amazon Neptune is a graph database and may not be the ideal choice for handling service usage data and standard SQL queries with ACID compliance. Multi-AZ is more about high availability and not necessarily about ACID compliance.
D. Deploying PostgreSQL on an Amazon EC2 instance using Amazon EBS storage gives you flexibility but involves more operational overhead in terms of management and scaling. It doesn’t provide the same level of managed service as Amazon RDS, and achieving ACID compliance may require more effort.
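Option C can be provisioned with a single call; MultiAZ=True keeps a synchronous standby in a second Availability Zone. A minimal boto3 sketch with placeholder identifiers and sizing:

```python
import boto3

rds = boto3.client("rds")

# Managed MySQL with a synchronous standby in another AZ for high availability;
# the engine itself provides ACID transactions and standard SQL.
rds.create_db_instance(
    DBInstanceIdentifier="usage-analytics-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # placeholder; use Secrets Manager in practice
    MultiAZ=True,
)
```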
A company recently launched its website to serve content to its global user base. The company wants to store and accelerate the delivery of static content to its users by leveraging Amazon CloudFront with an Amazon EC2 instance attached as its origin.
How should a solutions architect optimize high availability for the application?
A. Use Lambda@Edge for CloudFront.
B. Use Amazon S3 Transfer Acceleration for CloudFront.
C. Configure another EC2 instance in a different Availability Zone as part of the origin group.
D. Configure another EC2 instance as part of the origin server cluster in the same Availability Zone.
A. Using Lambda@Edge for CloudFront is primarily for running serverless functions at the edge locations for request/response customization and does not directly address high availability for the origin servers.
B. Amazon S3 Transfer Acceleration is a feature of Amazon S3 and not directly related to optimizing high availability for the application’s EC2 instances.
D. Configuring another EC2 instance as part of the origin server cluster in the same Availability Zone might help with load balancing, but it wouldn’t provide the same level of high availability as using multiple Availability Zones. If the entire Availability Zone goes down, the application would be impacted.
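Option C is expressed in CloudFront as an origin group with failover criteria. The fragment below is an illustrative slice of a distribution configuration (field names follow the CloudFront API shape; the origin IDs are placeholders), not a complete update_distribution call:

```python
# Illustrative OriginGroups fragment for a CloudFront distribution config.
# CloudFront retries the secondary origin (in another AZ) when the primary
# returns one of the listed status codes or fails to connect.
origin_groups = {
    "Quantity": 1,
    "Items": [
        {
            "Id": "ec2-origin-group",
            "FailoverCriteria": {
                "StatusCodes": {"Quantity": 3, "Items": [500, 502, 503]}
            },
            "Members": {
                "Quantity": 2,
                "Items": [
                    {"OriginId": "ec2-primary-az-a"},
                    {"OriginId": "ec2-secondary-az-b"},
                ],
            },
        }
    ],
}
```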
An application running on an Amazon EC2 instance in VPC-A needs to access files on another EC2 instance in VPC-B. Both are in separate AWS accounts. The network administrator needs to design a solution to configure secure access to the EC2 instance in VPC-B from VPC-A. The connectivity should not have a single point of failure or bandwidth concerns.
Which solution will meet these requirements?
A. Set up a VPC peering connection between VPC-A and VPC-B.
B. Set up VPC gateway endpoints for the EC2 instance running in VPC-B.
C. Attach a virtual private gateway to VPC-B and set up routing from VPC-A.
D. Create a private virtual interface (VIF) for the EC2 instance running in VPC-B and add appropriate routes from VPC-A.
B. Setting up VPC gateway endpoints is used for privately accessing AWS services such as Amazon S3 or DynamoDB. It does not provide connectivity between EC2 instances in different VPCs, especially when they are in separate AWS accounts.
C. Attaching a virtual private gateway to VPC-B is part of a VPN setup, which is typically used to connect an on-premises network to a VPC. A VPN tunnel also introduces bandwidth limits and additional components that can fail, so it does not meet the requirements.
D. A private virtual interface (VIF) is an AWS Direct Connect construct used to connect an on-premises network to a VPC. It is not a mechanism for connecting two VPCs, so it cannot provide the required access from VPC-A to VPC-B.
A company currently stores symmetric encryption keys in a hardware security module (HSM). A solutions architect must design a solution to migrate key management to AWS. The solution should allow for key rotation and support the use of customer-provided keys.
Where should the key material be stored to meet these requirements?
A. Amazon S3
B. AWS Secrets Manager
C. AWS Systems Manager Parameter store
D. AWS Key Management Service (AWS KMS)
A. Amazon S3 is an object storage service and not designed for managing encryption keys directly. While you can encrypt data stored in S3 using AWS KMS, S3 itself is not a key management solution.
B. AWS Secrets Manager is designed for managing and rotating secrets, such as database passwords and API keys, but it is not the primary service for managing encryption keys.
C. AWS Systems Manager Parameter Store is used for storing configuration data and parameters. While it can store sensitive information, it is not a dedicated key management solution for encryption keys.
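Option D supports bring-your-own-key: create a KMS key with no key material (Origin='EXTERNAL'), then import the material that currently lives in the HSM. A condensed boto3 sketch; the local file path for the exported key material is illustrative:

```python
import boto3
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

kms = boto3.client("kms")

# 1. Create a symmetric KMS key whose material will be imported, not generated.
key = kms.create_key(Origin="EXTERNAL", Description="Migrated from on-prem HSM")
key_id = key["KeyMetadata"]["KeyId"]

# 2. Fetch the wrapping public key and import token from KMS.
params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)

# 3. Wrap the 256-bit key exported from the HSM (illustrative local file) with
#    the KMS-provided public key, then import the wrapped material.
hsm_key_material = open("hsm-export/key.bin", "rb").read()
wrapping_key = serialization.load_der_public_key(params["PublicKey"])
wrapped = wrapping_key.encrypt(
    hsm_key_material,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

kms.import_key_material(
    KeyId=key_id,
    ImportToken=params["ImportToken"],
    EncryptedKeyMaterial=wrapped,
    ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE",
)
```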
A recent analysis of a company's IT expenses highlights the need to reduce backup costs. The company's chief information officer wants to simplify the on-premises backup infrastructure and reduce costs by eliminating the use of physical backup tapes. The company must preserve the existing investment in the on-premises backup applications and workflows.
What should a solutions architect recommend?
A. Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.
B. Set up an Amazon EFS file system that connects with the backup applications using the NFS interface.
C. Set up an Amazon EFS file system that connects with the backup applications using the iSCSI interface.
D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library (VTL) interface.
A. Setting up AWS Storage Gateway with the NFS interface (file gateway) would provide file-based access to storage in AWS and might not be the best fit for backup applications that are typically designed to work with tape-based interfaces or block-based storage.
B. Setting up an Amazon EFS file system with NFS access would provide a scalable file system in AWS but may not seamlessly integrate with existing backup applications and workflows designed for tape-based backups.
C. Setting up an Amazon EFS file system with iSCSI access is not a typical configuration, and it does not provide the same compatibility and integration as the iSCSI-VTL interface for backup applications.
A company hosts an application on an Amazon EC2 instance that requires a maximum of 200 GB storage space. The application is used infrequently, with peaks during mornings and evenings. Disk I/O varies, but peaks at 3,000 IOPS. The chief financial officer of the company is concerned about costs and has asked a solutions architect to recommend the most cost-effective storage option that does not sacrifice performance.
Which solution should the solutions architect recommend?
A. Amazon Elastic Block Store (Amazon EBS) Cold HDD (sc1)
B. Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2)
C. Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1)
D. Amazon Elastic Block Store (Amazon EBS) Throughput Optimized HDD (st1)
A. Amazon EBS Cold HDD (sc1) is a cost-effective option with lower performance and is more suitable for infrequently accessed data that doesn’t require high IOPS. It may not meet the performance requirements during peak periods when disk I/O reaches 3,000 IOPS.
C. Amazon EBS Provisioned IOPS SSD (io1) allows you to provision a specific number of IOPS, but it may be overkill and more costly for this scenario if you provision the maximum required 3,000 IOPS continuously.
D. Amazon EBS Throughput Optimized HDD (st1) is optimized for sequential workloads and is not designed for high IOPS requirements. It may not meet the performance needs during peak I/O periods.
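The gp2 sizing arithmetic makes option B concrete: baseline performance is 3 IOPS per GiB (600 IOPS for 200 GiB), and volumes under 1 TiB can burst to 3,000 IOPS on accumulated credits, which matches the infrequent peaks. A minimal boto3 sketch with a placeholder Availability Zone:

```python
import boto3

ec2 = boto3.client("ec2")

# 200 GiB gp2: 200 * 3 = 600 baseline IOPS, bursting to 3,000 IOPS,
# at General Purpose SSD pricing rather than Provisioned IOPS pricing.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=200,
    VolumeType="gp2",
)
```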
A company’s application hosted on Amazon EC2 instances needs to access an Amazon S3 bucket. Due to data sensitivity, traffic cannot traverse the internet.
How should a solutions architect configure access?
A. Create a private hosted zone using Amazon Route 53.
B. Configure a VPC gateway endpoint for Amazon S3 in the VPC.
C. Configure AWS PrivateLink between the EC2 instance and the S3 bucket.
D. Set up a site-to-site VPN connection between the VPC and the S3 bucket.
A. Creating a private hosted zone using Amazon Route 53 addresses DNS resolution but does not provide a mechanism for reaching Amazon S3 without traversing the internet.
C. Configuring AWS PrivateLink is a valid way to reach AWS services privately, but it relies on interface endpoints, which add cost and complexity compared to a gateway endpoint when the only goal is private access to Amazon S3.
D. Setting up a site-to-site VPN connection between the VPC and the S3 bucket is not the most efficient solution for this scenario. It is more complex, involves more overhead, and does not provide the same direct, private access as a VPC gateway endpoint.
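Option B keeps S3 traffic on the AWS network by adding a gateway endpoint route to the VPC's route tables. A minimal boto3 sketch with placeholder VPC and route-table IDs and a Region-specific service name:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoints for S3 are free and route S3 requests privately,
# so the EC2 application never sends traffic over the internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0priv5678"],
)
```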
A company has two applications it wants to migrate to AWS. Both applications process a large set of files by accessing the same files at the same time. Both applications need to read the files with low latency.
Which architecture should a solutions architect recommend for this situation?
A. Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an instance store volume to store the data.
B. Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an Amazon Elastic Block Store (Amazon EBS) volume to store the data.
C. Configure one memory optimized Amazon EC2 instance to run both applications simultaneously. Create an Amazon Elastic Block Store (Amazon EBS) volume with Provisioned IOPS to store the data.
D. Configure two Amazon EC2 instances to run both applications. Configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data.
A. Using AWS Lambda functions for running the applications is not suitable for applications that need to process a large set of files simultaneously with low latency. EC2 instances would be better suited for this purpose.
B. Similar to option A, using AWS Lambda functions with an EC2 instance and Amazon EBS volume is not the best choice for processing large sets of files simultaneously with low latency.
C. Configuring one EC2 instance to run both applications may lead to resource contention and may not provide the desired low-latency access to the files, especially if both applications need to access the files at the same time.
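Option D gives both instances a shared POSIX file system they can mount concurrently. A minimal boto3 sketch of the EFS side, assuming hypothetical subnet and security group IDs; each EC2 instance then mounts the file system over NFS:

```python
import boto3

efs = boto3.client("efs")

# One file system, shared by both applications, with low-latency concurrent access.
fs = efs.create_file_system(
    CreationToken="shared-file-processing",
    PerformanceMode="generalPurpose",
    ThroughputMode="bursting",
)

# A mount target per Availability Zone so each EC2 instance mounts locally.
for subnet_id in ["subnet-az-a", "subnet-az-b"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0efs1234"],
    )
```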
An ecommerce company has noticed performance degradation of its Amazon RDS-based web application. The performance degradation is attributed to an increase in the number of read-only SQL queries triggered by business analysts. A solutions architect needs to solve the problem with minimal changes to the existing web application.
What should the solutions architect recommend?
A. Export the data to Amazon DynamoDB and have the business analysts run their queries.
B. Load the data into Amazon ElastiCache and have the business analysts run their queries.
C. Create a read replica of the primary database and have the business analysts run their queries.
D. Copy the data into an Amazon Redshift cluster and have the business analysts run their queries.
A. Exporting the data to Amazon DynamoDB would require significant changes to the application to use DynamoDB as a data store, and it may not be the most efficient solution for running complex SQL queries.
B. Loading the data into Amazon ElastiCache (e.g., Redis or Memcached) is suitable for caching frequently accessed data but does not provide the capability to run complex SQL queries that business analysts need.
D. Copying the data into an Amazon Redshift cluster would involve significant changes to the application to use Redshift as a data warehouse. It is a suitable choice for analytics workloads but may not be the most straightforward solution in this context.
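Option C needs only one API call and a connection-string change for the analysts. A minimal boto3 sketch with placeholder identifiers:

```python
import boto3

rds = boto3.client("rds")

# Asynchronously replicated read-only copy; analysts point their SQL tools at
# the replica endpoint, taking their read load off the primary instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="webapp-db-analytics-replica",
    SourceDBInstanceIdentifier="webapp-db",
    DBInstanceClass="db.r5.large",
)
```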
A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS database. Compliance regulations mandate that all personally identifiable information (PII) be encrypted at rest.
Which solution should a solutions architect recommend to meet this requirement with the LEAST amount of changes to the infrastructure?
A. Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database volume.
B. Deploy AWS CloudHSM, generate encryption keys, and use the keys to encrypt database volumes.
C. Configure SSL encryption using AWS Key Management Service (AWS KMS) to encrypt database volumes.
D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.
A. Deploying AWS Certificate Manager and using certificates is primarily for securing data in transit, not for encrypting data at rest. It doesn’t address the requirement for encrypting PII data stored on Amazon EBS and RDS volumes.
B. Deploying AWS CloudHSM and generating encryption keys is a valid option for encryption, but it involves significant changes to the infrastructure, which goes against the requirement to make the least amount of changes.
C. Configuring SSL encryption with AWS KMS for encrypting data in transit doesn’t directly address the requirement for encrypting data at rest on Amazon EBS and RDS volumes.
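For option D, encryption at rest is a property set when the volume or DB instance (or an encrypted copy of its snapshot) is created. A minimal boto3 sketch with placeholder identifiers and a hypothetical KMS key alias; an existing unencrypted RDS instance is typically migrated by copying a snapshot with a KMS key and restoring from the encrypted copy:

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# Encrypted EBS volume for the EC2 tier.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId="alias/pii-at-rest",
)

# Encrypted copy of the existing RDS snapshot; restoring from it yields an
# encrypted database instance without changing the application itself.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="app-db-snapshot",
    TargetDBSnapshotIdentifier="app-db-snapshot-encrypted",
    KmsKeyId="alias/pii-at-rest",
)
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="app-db-encrypted",
    DBSnapshotIdentifier="app-db-snapshot-encrypted",
)
```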