AWS Skill Builder Flashcards
A company needs to look up configuration details about how a Linux-based Amazon EC2 instance was launched.
Which command should a solutions architect run on the EC2 instance to gather the system metadata?
A. curl http://169.254.169.254/latest/meta-data/
B. curl http://localhost/latest/meta-data/
C. curl http://254.169.254.169/latest/meta-data/
D. curl http://192.168.0.1/latest/meta-data/
A. curl http://169.254.169.254/latest/meta-data/
Correct. The Instance Metadata Service is reachable only from within the instance, at the link-local address 169.254.169.254. The other addresses do not serve instance metadata.
A company needs to maintain data records for a minimum of 5 years. The data is rarely accessed after it is stored. The data must be accessible within 2 hours.
Which solution will meet these requirements MOST cost-effectively?
A. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Access the data by using AWS Direct Connect.
B. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Create snapshots. Store the snapshots in an Amazon S3 bucket.
C. Store the data in an Amazon S3 bucket. Use an S3 Lifecycle policy to move the data to S3 Standard-Infrequent Access (S3 Standard-IA).
D. Store the data in an Amazon S3 bucket. Use an S3 Lifecycle policy to move the data to S3 Glacier Instant Retrieval.
D. Store the data in an Amazon S3 bucket. Use an S3 Lifecycle policy to move the data to S3 Glacier Instant Retrieval.
Correct. Storing the data in an S3 bucket provides a cost-effective initial location for the data. S3 Glacier Instant Retrieval is the most cost-effective archival storage class among the options that meets the 2-hour retrieval requirement.
For more information about how to move data between S3 storage classes automatically, see Managing your storage lifecycle.
For more information about S3 storage classes, see Using Amazon S3 storage classes.
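As a sketch of what such a lifecycle rule could look like, the following builds the configuration dictionary in the shape that boto3's put_bucket_lifecycle_configuration accepts. The 90-day transition threshold and the bucket name in the comment are illustrative assumptions, not values from the scenario.

```python
# Sketch of an S3 Lifecycle rule: transition objects to S3 Glacier
# Instant Retrieval and expire them only after the 5-year retention
# period. The 90-day transition threshold is an assumption.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-after-90-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects in the bucket
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER_IR"}
            ],
            "Expiration": {"Days": 5 * 365},  # keep records at least 5 years
        }
    ]
}

# With boto3, this dictionary would be applied as:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="example-records-bucket",   # hypothetical bucket name
#       LifecycleConfiguration=lifecycle_configuration,
#   )
print(lifecycle_configuration["Rules"][0]["Transitions"][0]["StorageClass"])
```

Note that GLACIER_IR is the storage class identifier the S3 API uses for S3 Glacier Instant Retrieval.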
A company has strict data protection requirements. A solutions architect must configure security for a VPC to ensure that backend Amazon RDS DB instances cannot be accessed from the internet. The solutions architect must ensure that the DB instances are accessible from the application tier over a specified port only.
Which actions should the solutions architect take to meet these requirements? (Select TWO.)
A. Specify a DB subnet group that contains only private subnets for the DB instances.
B. Attach an elastic network interface with a private IPv4 address to each DB instance.
C. Configure AWS Shield with the VPC. Update the route tables for the subnets that the DB instances use.
D. Configure an AWS Direct Connect connection on the database port between the application tier and the backend.
E. Add an inbound rule to the database security group that allows requests from the security group of the application tier over the database port. Remove other inbound rules.
A. Specify a DB subnet group that contains only private subnets for the DB instances.
Correct. A private subnet is one component to use to secure the database tier. Internet traffic is not routed to a private subnet. When you place DB instances in a private subnet, you add a layer of security.
For more information about VPCs with public subnets and private subnets, see Routing.
E. Add an inbound rule to the database security group that allows requests from the security group of the application tier over the database port. Remove other inbound rules.
Correct. Security groups can restrict access to the DB instances. With this rule, the database security group allows access from only the application tier, and only over the specified database port.
For more information about security groups, see Security group basics.
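A minimal sketch of that ingress rule follows, in the shape that boto3's authorize_security_group_ingress accepts. The port and both security group IDs are hypothetical placeholders; referencing the application tier's security group (rather than a CIDR range) is what limits access to that tier.

```python
# Sketch: allow the database port (3306 assumed here for MySQL/MariaDB)
# only from instances that carry the application tier's security group.
# Both group IDs below are hypothetical placeholders.
db_port = 3306
ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": db_port,
    "ToPort": db_port,
    # A security group reference means "any resource attached to the
    # application tier's security group", regardless of its IP address.
    "UserIdGroupPairs": [{"GroupId": "sg-0123456789abcdef0"}],
}

# With boto3, the rule would be applied to the database security group as:
#   ec2 = boto3.client("ec2")
#   ec2.authorize_security_group_ingress(
#       GroupId="sg-0fedcba9876543210",   # database security group
#       IpPermissions=[ingress_rule],
#   )
print(ingress_rule["UserIdGroupPairs"][0]["GroupId"])
```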
A company uses one AWS account to run production workloads. The company has a separate AWS account for its security team. During periodic audits, the security team needs to view specific account settings and resource configurations in the AWS account that runs production workloads. A solutions architect must provide the required access to the security team by designing a solution that follows AWS security best practices.
Which solution will meet these requirements?
A. Create an IAM user for each security team member in the production account. Attach a permissions policy that provides the permissions required by the security team to each user.
B. Create an IAM role in the production account. Attach a permissions policy that provides the permissions required by the security team. Add the security team account to the trust policy.
C. Create a new IAM user in the production account. Assign administrative privileges to the user. Allow the security team to use this account to log in to the systems that need to be accessed.
D. Create an IAM user for each security team member in the production account. Attach a permissions policy that provides the permissions required by the security team to a new IAM group. Assign the security team members to the group.
B. Create an IAM role in the production account. Attach a permissions policy that provides the permissions required by the security team. Add the security team account to the trust policy.
Correct. This solution follows security best practices by using a cross-account IAM role to delegate least-privilege permissions to the security team, without creating long-term credentials in the production account.
For more information about how to use roles to delegate permissions, see Require Human Users to Use Federation with an Identity Provider to Access AWS Using Temporary Credentials.
A company is investigating services to manage vulnerability scans in Amazon EC2 instances and container images that the company stores in Amazon Elastic Container Registry (Amazon ECR). The service should identify potential software vulnerabilities and categorize the severity of the vulnerabilities.
Which AWS service will meet these requirements?
A. Amazon GuardDuty
B. Patch Manager, a capability of AWS Systems Manager
C. Amazon Inspector
D. AWS Config
C. Amazon Inspector
Correct. Amazon Inspector removes the operational overhead that is necessary to configure a vulnerability management solution. Amazon Inspector works with both EC2 instances and container images in Amazon ECR to identify potential software vulnerabilities and to categorize the severity of the vulnerabilities.
A company that processes satellite images has an application that runs on AWS. The company stores the images in an Amazon S3 bucket. For compliance reasons, the company must replicate all data once a month to an on-premises location. The average amount of data that the company needs to transfer is 60 TB.
What is the MOST cost-effective way to transfer this data?
A. Export the data monthly from the existing S3 bucket to an AWS Snowball Edge Storage Optimized device. Ship the device to the on-premises location. Transfer the data. Return the device a week later.
B. Use S3 bucket replication to copy all objects to a new S3 bucket that uses S3 Standard-Infrequent Access (S3 Standard-IA) storage. Use an AWS Storage Gateway File Gateway to transfer the data from the new S3 bucket to the on-premises location. Delete the images from the new S3 bucket after the transfer of the data.
C. Use S3 bucket replication to copy all objects to a new S3 bucket that uses S3 Standard-Infrequent Access (S3 Standard-IA) storage. Use Amazon S3 to transfer the data from the new S3 bucket to the on-premises location. Delete the images from the new S3 bucket after the transfer of the data.
D. Create an Amazon CloudFront distribution for the objects in the existing S3 bucket. Download the objects from CloudFront to the on-premises location every month.
A. Export the data monthly from the existing S3 bucket to an AWS Snowball Edge Storage Optimized device. Ship the device to the on-premises location. Transfer the data. Return the device a week later.
Correct. The base price covers the device and 10 days of usage at the on-premises location. If the company returns the device within a week, the company pays the base price and the price for data transfer out of AWS.
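A rough back-of-the-envelope comparison shows why a device-based transfer tends to win at this scale. Every price below is an illustrative placeholder assumption, not current AWS list pricing; the point is only the shape of the comparison.

```python
# Illustrative cost comparison for moving 60 TB out of AWS each month.
# ALL prices here are placeholder assumptions for the sketch,
# NOT actual AWS list prices.
data_tb = 60
data_gb = data_tb * 1024                  # 61,440 GB

internet_egress_per_gb = 0.09             # assumed $/GB internet data transfer out
internet_cost = data_gb * internet_egress_per_gb

snowball_job_fee = 300.0                  # assumed flat per-job device fee
snowball_export_per_gb = 0.03             # assumed $/GB to export data to the device
snowball_cost = snowball_job_fee + data_gb * snowball_export_per_gb

print(f"Internet transfer: ~${internet_cost:,.0f} per month")
print(f"Snowball export:   ~${snowball_cost:,.0f} per month")
```

Under these assumed rates the per-GB device export fee is well below the per-GB internet egress rate, so the flat device fee is quickly amortized at tens of terabytes.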
Which components are required to build a site-to-site VPN connection to AWS? (Select TWO.)
A. An internet gateway
B. A NAT gateway
C. A customer gateway
D. Amazon API Gateway
E. A virtual private gateway
C. A customer gateway
Correct. A customer gateway is required for the VPN connection to be established. A customer gateway device is set up and configured in the customer’s data center.
E. A virtual private gateway
Correct. A virtual private gateway is attached to a VPC to create a site-to-site VPN connection to AWS. The connection carries traffic from the on-premises data center to the VPC inside an encrypted tunnel, even though that traffic crosses the public internet.
A company has an on-premises application that exports log files about users of a website. The log files range from 20 GB to 30 GB in size. A solutions architect has created an Amazon S3 bucket to store the files. The files will be uploaded directly from the application. The network connection experiences intermittent failures, and the upload sometimes fails. The solutions architect must design a solution that resolves this issue. The solution must minimize operational overhead.
Which solution will meet these requirements?
A. Enable S3 Transfer Acceleration.
B. Copy the files to an Amazon EC2 instance in the closest AWS Region. Use S3 Lifecycle policies to copy the log files to Amazon S3.
C. Use multipart upload to Amazon S3.
D. Upload the files to two AWS Regions simultaneously. Enable two-way Cross-Region Replication between the two Regions.
C. Use multipart upload to Amazon S3.
Correct. You can use a multipart upload to upload larger files, such as the files in this scenario. If transmission of any part fails, you can retransmit that part without affecting other parts.
For more information about multipart uploads, see Uploading and Copying Objects Using Multipart Upload.
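The retry benefit is easiest to see by planning the parts for one of these files. The sketch below splits a 25 GiB file into parts; the 100 MiB part size is an assumption (S3 requires each part except the last to be at least 5 MiB, and allows at most 10,000 parts per upload).

```python
import math

# Sketch: plan the parts for a multipart upload of a 25 GiB log file.
# The 100 MiB part size is an assumed choice within S3's limits
# (5 MiB minimum per part except the last, 10,000 parts maximum).
file_size = 25 * 1024**3          # 25 GiB in bytes
part_size = 100 * 1024**2         # 100 MiB per part

num_parts = math.ceil(file_size / part_size)
last_part = file_size - (num_parts - 1) * part_size

assert num_parts <= 10_000, "S3 allows at most 10,000 parts per upload"
print(f"{num_parts} parts; a failed part costs only one {part_size >> 20} MiB retry")
```

In practice, boto3's upload_file performs multipart uploads automatically for large objects and retries failed parts, which is why this option also minimizes operational overhead.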
A company is transitioning its Amazon EC2 based MariaDB database to Amazon RDS. The company has already identified a database instance type that will meet the company’s CPU and memory requirements. The database must provide at least 40 GiB of storage capacity and 1,000 IOPS.
Which storage configuration for the Amazon RDS for MariaDB instance is MOST cost-effective?
A. Provision 350 GiB of magnetic storage for the RDS instance.
B. Provision 50 GiB of General Purpose SSD (gp3) storage for the RDS instance.
C. Provision 334 GiB of General Purpose SSD (gp2) storage for the RDS instance.
D. Provision 50 GiB of Provisioned IOPS storage with 1,000 IOPS for the RDS instance.
B. Provision 50 GiB of General Purpose SSD (gp3) storage for the RDS instance.
Correct. General Purpose SSD (gp3) storage includes 3,000 IOPS at no additional cost, regardless of volume size. A 50 GiB gp3 volume therefore satisfies both the 40 GiB capacity requirement and the 1,000 IOPS requirement at the lowest cost.
For more information about EBS pricing, see Amazon EBS Pricing.
For more information about General Purpose SSD storage, see General Purpose SSD Storage.
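The arithmetic behind the 334 GiB figure in option C is worth making explicit: gp2 baseline performance scales at 3 IOPS per GiB, so hitting an IOPS target forces you to over-provision capacity, while gp3's included 3,000 IOPS baseline applies at any volume size.

```python
import math

# gp2 baseline performance is 3 IOPS per GiB of provisioned storage,
# so reaching an IOPS target can force over-provisioning capacity.
# gp3 includes a 3,000 IOPS baseline regardless of volume size.
required_iops = 1000
required_gib = 40

gp2_iops_per_gib = 3
gp2_min_size = max(required_gib, math.ceil(required_iops / gp2_iops_per_gib))

gp3_baseline_iops = 3000
gp3_min_size = required_gib   # gp3 already exceeds 1,000 IOPS at any size

print(f"gp2 needs {gp2_min_size} GiB to deliver {required_iops} IOPS")
print(f"gp3 needs only {gp3_min_size} GiB (includes {gp3_baseline_iops} IOPS)")
```

This is exactly why option C must provision 334 GiB of gp2 while option B needs only 50 GiB of gp3, making gp3 the cheaper choice.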
An application runs on two Amazon EC2 instances behind a Network Load Balancer. The EC2 instances are in a single Availability Zone.
What should a solutions architect do to make this architecture more highly available?
A. Create a new VPC with two new EC2 instances in the same Availability Zone as the original EC2 instances. Create a VPC peering connection between the two VPCs.
B. Replace the Network Load Balancer with an Application Load Balancer that is configured with the EC2 instances in an Auto Scaling group.
C. Configure Amazon Route 53 to perform health checks on the EC2 instances behind the Network Load Balancer. Add a failover routing policy.
D. Place the EC2 instances in an Auto Scaling group that extends across multiple Availability Zones. Designate the Auto Scaling group as the target of the Network Load Balancer.
D. Place the EC2 instances in an Auto Scaling group that extends across multiple Availability Zones. Designate the Auto Scaling group as the target of the Network Load Balancer.
Correct. This solution spreads the EC2 instances across multiple Availability Zones, and the Auto Scaling group automatically replaces unhealthy instances and adds capacity when it is needed.
For more information about Amazon EC2 Auto Scaling, see Amazon EC2 Auto Scaling benefits.
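As a sketch, the settings that make the group multi-AZ and attach it to the Network Load Balancer's target group are shown below in the shape that boto3's create_auto_scaling_group accepts. The subnet IDs, launch template name, and target group ARN are hypothetical placeholders.

```python
# Sketch of Auto Scaling group settings for the multi-AZ architecture.
# All names, subnet IDs, and ARNs below are hypothetical placeholders.
asg_params = {
    "AutoScalingGroupName": "web-asg",
    "LaunchTemplate": {"LaunchTemplateName": "web-template"},
    "MinSize": 2,
    "MaxSize": 6,
    # One subnet per Availability Zone spreads instances across AZs.
    "VPCZoneIdentifier": "subnet-aaa111,subnet-bbb222",
    # Registering the group with the NLB's target group keeps the load
    # balancer routing to whatever instances the group currently runs.
    "TargetGroupARNs": [
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/web/0123456789abcdef"
    ],
}

subnets = asg_params["VPCZoneIdentifier"].split(",")
print(f"{len(subnets)} Availability Zones covered")
```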
A company runs its website on Amazon EC2 instances behind an Application Load Balancer that is configured as the origin for an Amazon CloudFront distribution. The company wants to protect against cross-site scripting and SQL injection attacks.
Which approach should a solutions architect recommend to meet these requirements?
A. Enable AWS Shield Advanced. List the CloudFront distribution as a protected resource.
B. Define an AWS Shield Advanced policy in AWS Firewall Manager to block cross-site scripting and SQL injection attacks.
C. Deploy AWS Firewall Manager on the EC2 instances. Create conditions and rules that block cross-site scripting and SQL injection attacks.
D. Set up AWS WAF on the CloudFront distribution. Use conditions and rules that block cross-site scripting and SQL injection attacks.
D. Set up AWS WAF on the CloudFront distribution. Use conditions and rules that block cross-site scripting and SQL injection attacks.
Correct. AWS WAF can detect the presence of SQL code that is likely to be malicious (known as SQL injection). AWS WAF also can detect the presence of a script that is likely to be malicious (known as cross-site scripting).
For more information about AWS WAF, see AWS WAF.
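A common way to get these protections is with AWS managed rule groups rather than hand-written conditions. The sketch below builds the rules portion of a web ACL; the priorities and metric names are arbitrary choices, and the assumption is that AWSManagedRulesCommonRuleSet covers cross-site scripting patterns while AWSManagedRulesSQLiRuleSet covers SQL injection patterns.

```python
# Sketch of AWS WAF web ACL rules using AWS managed rule groups.
# Priorities and metric names are arbitrary choices for the sketch.
managed_rule_groups = [
    "AWSManagedRulesCommonRuleSet",   # includes cross-site scripting rules
    "AWSManagedRulesSQLiRuleSet",     # SQL injection rules
]

waf_rules = [
    {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "ManagedRuleGroupStatement": {"VendorName": "AWS", "Name": name}
        },
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }
    for priority, name in enumerate(managed_rule_groups)
]
print([rule["Name"] for rule in waf_rules])
```

Note that a web ACL for a CloudFront distribution must be created with the CLOUDFRONT scope in the us-east-1 Region.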
A reporting application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. For complex reports, the application can take up to 15 minutes to respond to a request. A solutions architect is concerned that users will receive HTTP 5xx errors if a report request is in process during a scale-in event.
What should the solutions architect do to ensure that user requests will be completed before instances are terminated?
A. Enable sticky sessions (session affinity) for the target group of the instances.
B. Increase the instance size in the Application Load Balancer target group.
C. Increase the cooldown period for the Auto Scaling group to a greater amount of time than the time required for the longest running responses.
D. Increase the deregistration delay timeout for the target group of the instances to greater than 900 seconds.
D. Increase the deregistration delay timeout for the target group of the instances to greater than 900 seconds.
Correct. By default, the Application Load Balancer waits 300 seconds before completing the deregistration process, which gives in-flight requests to the target time to complete. Because a report can take up to 15 minutes (900 seconds), increase the deregistration delay value for the target group to more than 900 seconds.
For more information about deregistration delay, see Deregistration Delay.
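A sketch of the target group attribute involved is below. The 960-second value is an assumption chosen simply to exceed the 15-minute report runtime; the attribute key is the one the Elastic Load Balancing API uses for deregistration delay.

```python
# Sketch: raise the target group's deregistration delay above the
# 15-minute (900-second) maximum report runtime. The 960-second value
# is an assumed choice, not a value from the scenario.
attributes = [
    {"Key": "deregistration_delay.timeout_seconds", "Value": "960"}
]

# With boto3, the attribute would be applied as:
#   elbv2 = boto3.client("elbv2")
#   elbv2.modify_target_group_attributes(
#       TargetGroupArn=target_group_arn,   # hypothetical ARN variable
#       Attributes=attributes,
#   )
assert int(attributes[0]["Value"]) > 900
print("deregistration delay:", attributes[0]["Value"], "seconds")
```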
A company is designing a disaster recovery (DR) architecture for an important application on AWS. The company has determined that the recovery time objective (RTO) is 5 minutes with a minimal running instance capacity to support the application in the AWS DR site. The company needs to minimize costs for the DR architecture.
Which DR strategy will meet these requirements?
A. Warm standby
B. Pilot light
C. Multi-site active-active
D. Backup and restore
A. Warm standby
Correct. This solution meets the requirement for an RTO of 5 minutes with a minimal running instance. The instances run at a low capacity and can scale within minutes.
For more information about warm standby, see Plan for Disaster Recovery (DR).
B. Pilot light
Incorrect. This solution would not meet the requirement for minimal running instance capacity. In a pilot light strategy, the application instances in the DR site are not running until failover.
C. Multi-site active-active
Incorrect. Because this is an active-active environment, this solution would meet the RTO requirement: the services already run at full capacity in both sites. However, this solution costs more than is necessary to meet the company's requirements.
D. Backup and restore
Incorrect. This solution would not meet the requirement for a minimal running instance. The instances would need to be created, and backup and restore strategies take hours to run.
A company is developing a chat application that will be deployed on AWS. The application stores the messages by using a key-value data model. Groups of users typically read the messages multiple times. A solutions architect must select a database solution that will scale for a high rate of reads and will deliver messages with microsecond latency.
Which database solution will meet these requirements?
A. Amazon Aurora with Aurora Replicas
B. Amazon DynamoDB with DynamoDB Accelerator (DAX)
C. Amazon Aurora with Amazon ElastiCache for Memcached
D. Amazon Neptune with Amazon ElastiCache for Memcached
A. Amazon Aurora with Aurora Replicas
Incorrect. Aurora is a relational database (not a key-value database). Aurora is not likely to achieve microsecond latency consistently.
For more information about Aurora, see What is Amazon Aurora?
B. Amazon DynamoDB with DynamoDB Accelerator (DAX)
Correct. DynamoDB is a NoSQL database that supports key-value records. DAX delivers response times in microseconds.
For more information about DynamoDB, see What is Amazon DynamoDB?
For more information about DAX, see In-Memory Acceleration with DynamoDB Accelerator (DAX).
C. Amazon Aurora with Amazon ElastiCache for Memcached
Incorrect. Aurora is a relational database (not a key-value database). Aurora is not likely to achieve microsecond latency consistently, even with ElastiCache.
For more information about Aurora, see What is Amazon Aurora?
For more information about ElastiCache for Memcached, see What is Amazon ElastiCache for Memcached?
D. Amazon Neptune with Amazon ElastiCache for Memcached
Incorrect. Neptune is a graph database that is optimized for working with highly connected data. Neptune is not optimized for simple key-value data.
For more information about Neptune, see What Is Amazon Neptune?
A media company is designing a new application for graphic rendering. The application requires up to 400 GB of storage for temporary data that is discarded after the frames are rendered. The application requires approximately 40,000 random IOPS to perform the rendering.
What is the MOST cost-effective storage option for this rendering application?
A. A storage optimized Amazon EC2 instance with instance store storage
B. A storage optimized Amazon EC2 instance with a Provisioned IOPS SSD (io1 or io2) Amazon Elastic Block Store (Amazon EBS) volume
C. A burstable Amazon EC2 instance with a Throughput Optimized HDD (st1) Amazon Elastic Block Store (Amazon EBS) volume
D. A burstable Amazon EC2 instance with Amazon S3 storage over a VPC endpoint
A. A storage optimized Amazon EC2 instance with instance store storage
Correct. Storage optimized instances are designed for workloads that require high, sequential read and write access to very large datasets on local storage. These instances are optimized to provide applications with tens of thousands of low-latency, random IOPS. The instance store has no additional cost.
For more information about storage optimized instances, see Storage Optimized Instances.
For more information about instance stores, see Amazon EC2 Instance Store.
B. A storage optimized Amazon EC2 instance with a Provisioned IOPS SSD (io1 or io2) Amazon Elastic Block Store (Amazon EBS) volume
Incorrect. Provisioned IOPS SSD (io1 or io2) EBS volumes can deliver more than the 40,000 IOPS that are required in the scenario. However, this solution is not as cost-effective as an instance store because Amazon EBS adds cost to the hourly instance rate. This solution provides persistence of data beyond the lifecycle of the instance, but persistence is not required in this use case.
For more information about Provisioned IOPS SSD (io1 or io2) EBS volumes, see Provisioned IOPS SSD Volumes.
For more information about pricing for Amazon EBS, see Amazon EBS Pricing.
C. A burstable Amazon EC2 instance with a Throughput Optimized HDD (st1) Amazon Elastic Block Store (Amazon EBS) volume
Incorrect. Throughput Optimized HDD (st1) EBS volumes are engineered to maximize the throughput of data that can be sent to and from a volume, not the random IOPS. Consequently, this solution does not meet the IOPS requirement. Additionally, Amazon EBS adds cost to the hourly instance rate. This solution provides persistence of data beyond the lifecycle of the instance, but persistence is not required in this use case.
For more information about Throughput Optimized HDD (st1) EBS volumes, see Throughput Optimized HDD and Cold HDD Volumes.
For more information about pricing for Amazon EBS, see Amazon EBS Pricing.
D. A burstable Amazon EC2 instance with Amazon S3 storage over a VPC endpoint
Incorrect. Amazon S3 is object storage, which is not well suited to the rapidly changing scratch data in this scenario. Block storage is the appropriate choice for the high rate of random read/write operations that rendering requires.
For more information about usage patterns for Amazon S3, see Performance Design Patterns for Amazon S3.