Neal Davis - Practice Test 6 - Correct Flashcards
Question 2:
A Solutions Architect needs to upload a large (2GB) file to an S3 bucket. What is the recommended way to upload a single large file to an S3 bucket?
A. Use AWS Import/Export
B. Use a single PUT request to upload the large file
C. Use Amazon Snowball
D. Use Multipart Upload
Explanation
In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.
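A minimal boto3 sketch of a multipart upload (the bucket and file names here are illustrative). The transfer manager splits the upload into parts automatically once the object crosses the configured threshold:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Use multipart uploads for objects larger than 100 MB, in 100 MB parts
    config = TransferConfig(
        multipart_threshold=100 * 1024 * 1024,
        multipart_chunksize=100 * 1024 * 1024,
    )

    # upload_file handles part uploads, retries, and completion transparently
    s3.upload_file("large-file.bin", "example-bucket", "large-file.bin", Config=config)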
CORRECT: “Use Multipart Upload” is the correct answer.
INCORRECT: “Use AWS Import/Export” is incorrect. AWS Import/Export is a service in which you ship physical storage devices containing your data to AWS, and AWS imports the data into S3. It is not used for uploading single files.
INCORRECT: “Use a single PUT request to upload the large file” is incorrect. The largest object that can be uploaded in a single PUT is 5 gigabytes.
INCORRECT: “Use Amazon Snowball” is incorrect. Snowball is used for migrating large quantities (TB/PB) of data into AWS; it is overkill for this requirement.
Question 3:
A Solutions Architect enabled Access Logs on an Application Load Balancer (ALB) and needs to process the log files using a hosted Hadoop service. What configuration changes and services can be leveraged to deliver this requirement?
A. Configure Access Logs to be delivered to DynamoDB and use EMR for processing the log files
B. Configure Access Logs to be delivered to S3 and use EMR for processing the log files
C. Configure Access Logs to be delivered to S3 and use Kinesis for processing the log files
D. Configure Access Logs to be delivered to EC2 and install Hadoop for processing the log files
Explanation
Access Logs can be enabled on ALB and configured to store data in an S3 bucket. Amazon EMR is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. EMR utilizes a hosted Hadoop framework running on Amazon EC2 and Amazon S3.
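As a sketch, access logging can be switched on with boto3 (the load balancer ARN and bucket name are placeholders; the bucket must also have a policy allowing the load balancer to write to it):

    import boto3

    elbv2 = boto3.client("elbv2")

    # Enable access log delivery from the ALB to an S3 bucket
    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",  # placeholder
        Attributes=[
            {"Key": "access_logs.s3.enabled", "Value": "true"},
            {"Key": "access_logs.s3.bucket", "Value": "example-alb-logs"},
        ],
    )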
CORRECT: “Configure Access Logs to be delivered to S3 and use EMR for processing the log files” is the correct answer.
INCORRECT: “Configure Access Logs to be delivered to EC2 and install Hadoop for processing the log files” is incorrect. EC2 does not provide a hosted Hadoop service.
INCORRECT: “Configure Access Logs to be delivered to DynamoDB and use EMR for processing the log files” is incorrect. You cannot configure access logs to be delivered to DynamoDB.
INCORRECT: “Configure Access Logs to be delivered to S3 and use Kinesis for processing the log files” is incorrect. Kinesis does not provide a hosted Hadoop service.
Question 5:
An Amazon EC2 instance has been launched into an Amazon VPC. A Solutions Architect needs to ensure that instances have both private and public DNS hostnames. Assuming settings were not changed during creation of the VPC, how will DNS hostnames be assigned by default? (choose 2)
A. In a non-default VPC instances will be assigned a public and private DNS hostname
B. In a default VPC instances will be assigned a private but not public DNS hostname
C. In a non-default VPC instances will be assigned a private but not public DNS hostname
D. In all VPC instances no DNS hostnames will be assigned
E. In a default VPC instances will be assigned a public and private DNS hostname
Explanation
When you launch an instance into a default VPC, we provide the instance with public and private DNS hostnames that correspond to the public IPv4 and private IPv4 addresses for the instance.
When you launch an instance into a non-default VPC, we provide the instance with a private DNS hostname, and we might provide a public DNS hostname, depending on the DNS attributes you specify for the VPC and whether your instance has a public IPv4 address.
All other statements are incorrect with default settings.
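For a non-default VPC, both DNS attributes can be enabled afterwards; a brief boto3 sketch (the VPC ID is a placeholder). Each attribute must be set in its own call:

    import boto3

    ec2 = boto3.client("ec2")

    # Both attributes must be enabled for instances to receive public DNS hostnames
    ec2.modify_vpc_attribute(VpcId="vpc-0abc1234", EnableDnsSupport={"Value": True})    # placeholder VPC ID
    ec2.modify_vpc_attribute(VpcId="vpc-0abc1234", EnableDnsHostnames={"Value": True})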
CORRECT: “In a default VPC instances will be assigned a public and private DNS hostname” is the correct answer.
CORRECT: “In a non-default VPC instances will be assigned a private but not public DNS hostname” is the correct answer.
INCORRECT: “In all VPC instances no DNS hostnames will be assigned” is incorrect as explained above.
INCORRECT: “In a non-default VPC instances will be assigned a public and private DNS hostname” is incorrect as explained above.
INCORRECT: “In a default VPC instances will be assigned a private but not a public DNS hostname” is incorrect as explained above.
Question 6:
The application development team in a company has developed a Java application and saved the source code in a .war file. The team would like to run the application on AWS resources and is looking for a service that can handle the provisioning and management of the underlying resources it will run on.
Which AWS service should a Solutions Architect recommend the Developers use to upload the Java source code file?
A. AWS CloudFormation
B. AWS CodeDeploy
C. AWS OpsWorks
D. AWS Elastic Beanstalk
Explanation
AWS Elastic Beanstalk can be used to quickly deploy and manage applications in the AWS Cloud. Developers upload applications and Elastic Beanstalk handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.
Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby, as well as different platform configurations for each language. To use Elastic Beanstalk, you create an application, upload an application version in the form of an application source bundle (for example, a Java .war file) to Elastic Beanstalk, and then provide some information about the application.
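A rough boto3 sketch of registering a .war as an application version and deploying it (the application, environment, and S3 names are assumptions):

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # Register the .war (already uploaded to S3) as a new application version
    eb.create_application_version(
        ApplicationName="my-java-app",
        VersionLabel="v1",
        SourceBundle={"S3Bucket": "example-artifacts", "S3Key": "my-java-app.war"},
    )

    # Deploy it; Elastic Beanstalk provisions and manages the underlying resources
    eb.update_environment(EnvironmentName="my-java-app-env", VersionLabel="v1")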
CORRECT: “AWS Elastic Beanstalk” is the correct answer.
INCORRECT: “AWS CodeDeploy” is incorrect. AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.
INCORRECT: “AWS CloudFormation” is incorrect. AWS CloudFormation uses templates to deploy infrastructure as code. It is not a PaaS service like Elastic Beanstalk and is more focused on infrastructure than applications and management of applications.
INCORRECT: “AWS OpsWorks” is incorrect. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet.
Question 9:
A security officer has requested that all data associated with a specific customer is encrypted. The data resides on Elastic Block Store (EBS) volumes. Which of the following statements about using EBS encryption are correct? (choose 2)
A. There is no direct way to change the encryption state of a volume
B. All instance types support encryption
C. Not all EBS types support encryption
D. All attached EBS volumes must share the same encryption state
E. Data in transit between an instance and encrypted volume is also encrypted
Explanation
All EBS volume types and all instance families support encryption, but not every instance type within those families supports it. There is no direct way to change the encryption state of an existing volume. Data in transit between an instance and an encrypted volume is also encrypted.
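Because the encryption state cannot be changed directly, the common workaround is to snapshot the volume and copy the snapshot with encryption enabled; a hedged boto3 sketch (the IDs and region are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Snapshot the unencrypted volume and wait for the snapshot to complete
    snap = ec2.create_snapshot(VolumeId="vol-0abc1234", Description="pre-encryption copy")  # placeholder ID
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # Copy the snapshot with encryption enabled; a new, encrypted volume can
    # then be created from the copy and swapped in for the original
    ec2.copy_snapshot(
        SourceSnapshotId=snap["SnapshotId"],
        SourceRegion="us-east-1",
        Encrypted=True,
    )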
CORRECT: “Data in transit between an instance and an encrypted volume is also encrypted” is the correct answer.
CORRECT: “There is no direct way to change the encryption state of a volume” is the correct answer.
INCORRECT: “Not all EBS types support encryption” is incorrect as all EBS volume types support encryption.
INCORRECT: “All attached EBS volumes must share the same encryption state” is incorrect. You can have encrypted and non-encrypted EBS volumes on a single instance.
INCORRECT: “All instance types support encryption” is incorrect. All instance families support encryption, but not all instance types.
Question 11:
A web application runs on a series of Amazon EC2 instances behind an Application Load Balancer (ALB). A Solutions Architect is updating the configuration with a health check and needs to select the protocol to use. What options are available? (choose 2)
A. HTTP
B. HTTPS
C. ICMP
D. TCP
E. SSL
Explanation
An Application Load Balancer periodically sends requests to its registered targets to test their status. These tests are called health checks.
Each load balancer node routes requests only to the healthy targets in the enabled Availability Zones for the load balancer. Each load balancer node checks the health of each target, using the health check settings for the target groups with which the target is registered. After your target is registered, it must pass one health check to be considered healthy. After each health check is completed, the load balancer node closes the connection that was established for the health check.
If a target group contains only unhealthy registered targets, the load balancer nodes route requests across its unhealthy targets.
For an ALB the possible protocols are HTTP and HTTPS. The default is the HTTP protocol.
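The health check protocol is configured on the target group; a minimal boto3 sketch (the VPC ID is a placeholder):

    import boto3

    elbv2 = boto3.client("elbv2")

    # For ALB target groups, HealthCheckProtocol can only be HTTP or HTTPS
    elbv2.create_target_group(
        Name="web-targets",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0abc1234",  # placeholder
        HealthCheckProtocol="HTTPS",
        HealthCheckPath="/health",
    )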
CORRECT: “HTTP” is the correct answer.
CORRECT: “HTTPS” is the correct answer.
INCORRECT: “SSL” is incorrect as this is not supported by the ALB.
INCORRECT: “TCP” is incorrect as this is not supported by the ALB.
INCORRECT: “ICMP” is incorrect as this is not supported by the ALB.
Question 12:
A government agency is using CloudFront for a web application that receives personally identifiable information (PII) from citizens.
What feature of CloudFront applies an extra level of encryption at CloudFront edge locations to ensure the PII data is secured end-to-end?
A. Origin access identity
B. Object invalidation
C. Field-level encryption
D. RTMP distribution
Explanation
With Amazon CloudFront, you can enforce secure end-to-end connections to origin servers by using HTTPS. Field-level encryption adds an additional layer of security that lets you protect specific data throughout system processing so that only certain applications can see it.
Field-level encryption allows you to enable your users to securely upload sensitive information to your web servers. The sensitive information provided by your users is encrypted at the edge, close to the user, and remains encrypted throughout your entire application stack. This encryption ensures that only applications that need the data—and have the credentials to decrypt it—are able to do so.
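A hedged boto3 sketch of a field-level encryption profile, which ties a field pattern to a public key (the key ID, provider ID, and field pattern are all assumptions):

    import boto3

    cf = boto3.client("cloudfront")

    # Fields matching the pattern are encrypted at the edge with the public key,
    # so only applications holding the private key can read them downstream
    cf.create_field_level_encryption_profile(
        FieldLevelEncryptionProfileConfig={
            "Name": "pii-profile",
            "CallerReference": "pii-profile-001",  # placeholder
            "EncryptionEntities": {
                "Quantity": 1,
                "Items": [{
                    "PublicKeyId": "K2EXAMPLE",    # placeholder key ID
                    "ProviderId": "pii-provider",
                    "FieldPatterns": {"Quantity": 1, "Items": ["national-id"]},
                }],
            },
        }
    )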
CORRECT: “Field-level encryption” is the correct answer.
INCORRECT: “Object invalidation” is incorrect. Object invalidation is a method to remove objects from the cache.
INCORRECT: “RTMP distribution” is incorrect. An RTMP distribution is a method of streaming media using Adobe Flash.
INCORRECT: “Origin access identity” is incorrect. Origin access identity applies to S3 bucket origins, not web servers.
Question 13:
A company is transitioning their web presence into the AWS cloud. As part of the migration the company will be running a web application both on-premises and in AWS for a period of time. During the period of co-existence the client would like 80% of the traffic to hit the AWS-based web servers and 20% to be directed to the on-premises web servers.
What method can a Solutions Architect use to distribute traffic as requested?
A. Use a Network Load Balancer to distribute traffic based on Instance ID
B. Use an Application Load Balancer to distribute traffic based on IP address
C. Use Route 53 with a simple routing policy
D. Use Route 53 with a weighted routing policy and configure the respective weights
Explanation
Route 53’s weighted routing policy is similar to the simple policy, but you can specify a weight per record. You create records that have the same name and type and assign each record a relative weight; traffic is routed to each record in proportion to its weight divided by the sum of all the weights (the weights do not need to total 100). To stop sending traffic to a resource you can change the weight of that record to 0.
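A hedged boto3 sketch of the two weighted records (the hosted zone ID, domain, and IP addresses are placeholders); with weights of 80 and 20, the AWS record receives 80/(80+20) of the queries:

    import boto3

    r53 = boto3.client("route53")

    def weighted_record(identifier, weight, ip):
        # Both records share the same name and type; traffic splits by
        # weight / sum-of-weights
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": identifier,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        }

    r53.change_resource_record_sets(
        HostedZoneId="Z0EXAMPLE",  # placeholder
        ChangeBatch={"Changes": [
            weighted_record("aws-web", 80, "203.0.113.10"),
            weighted_record("on-prem-web", 20, "198.51.100.10"),
        ]},
    )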
CORRECT: “Use Route 53 with a weighted routing policy and configure the respective weights” is the correct answer.
INCORRECT: “Use Route 53 with a simple routing policy” is incorrect as this will not split traffic based on weights as required.
INCORRECT: “Use an Application Load Balancer to distribute traffic based on IP address” is incorrect. An Application Load Balancer can distribute traffic to AWS and on-premises resources using IP addresses but cannot be used to distribute traffic in a weighted manner.
INCORRECT: “Use a Network Load Balancer to distribute traffic based on Instance ID” is incorrect. A Network Load Balancer can distribute traffic to AWS and on-premises resources using IP addresses (not Instance IDs).
Question 15:
A company runs an application on premises that stores a large quantity of semi-structured data using key-value pairs. The application code will be migrated to AWS Lambda and a highly scalable solution is required for storing the data.
Which datastore will be the best fit for these requirements?
A. Amazon RDS MySQL
B. Amazon DynamoDB
C. Amazon EFS
D. Amazon EBS
Explanation
Amazon DynamoDB is a NoSQL database that stores data using key-value pairs. It is ideal for storing large amounts of semi-structured data and is also highly scalable. This is the best solution for storing this data based on the requirements in the scenario.
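A minimal boto3 sketch of writing and reading a key-value item (the table, key, and attribute names are illustrative):

    import boto3

    table = boto3.resource("dynamodb").Table("app-data")  # placeholder table name

    # Items are schemaless beyond the key, so semi-structured data fits naturally
    table.put_item(Item={"pk": "user#123", "name": "Alice", "prefs": {"theme": "dark"}})

    item = table.get_item(Key={"pk": "user#123"})["Item"]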
CORRECT: “Amazon DynamoDB” is the correct answer.
INCORRECT: “Amazon EFS” is incorrect. The Amazon Elastic File System (EFS) is not suitable for storing key-value pairs.
INCORRECT: “Amazon RDS MySQL” is incorrect. Amazon Relational Database Service (RDS) is used for structured data as it is an SQL type of database.
INCORRECT: “Amazon EBS” is incorrect. Amazon Elastic Block Store (EBS) is a block-based storage system whose volumes are attached to EC2 instances. It is not suitable for key-value data and cannot be used directly by Lambda functions.
Question 16:
A distribution method is required for some static files. The requests will mainly be GET requests and a high volume of GETs is expected, often exceeding 2000 per second. The files are currently stored in an S3 bucket. According to AWS best practices, how can performance be optimized?
A. Use ElastiCache to cache the content
B. Use S3 Transfer Acceleration
C. Use cross-region replication to spread the load across regions
D. Integrate CloudFront with S3 to cache the content
Explanation
Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix in a bucket. There are no limits to the number of prefixes in a bucket.
If your workload is mainly sending GET requests, in addition to the preceding guidelines, you should consider using Amazon CloudFront for performance optimization. By integrating CloudFront with Amazon S3, you can distribute content to your users with low latency and a high data transfer rate.
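A hedged boto3 sketch of a minimal distribution fronting the bucket (the bucket domain and reference string are placeholders; a production configuration would normally also restrict direct bucket access):

    import boto3

    cf = boto3.client("cloudfront")

    cf.create_distribution(DistributionConfig={
        "CallerReference": "static-files-001",  # placeholder
        "Comment": "Cache static files from S3",
        "Enabled": True,
        "Origins": {"Quantity": 1, "Items": [{
            "Id": "s3-origin",
            "DomainName": "example-bucket.s3.amazonaws.com",  # placeholder
            "S3OriginConfig": {"OriginAccessIdentity": ""},
        }]},
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "MinTTL": 0,
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
        },
    })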
CORRECT: “Integrate CloudFront with S3 to cache the content” is the correct answer.
INCORRECT: “Use cross-region replication to spread the load across regions” is incorrect. Cross-region replication creates a replica copy in another Region but should not be used for spreading read requests across Regions. It would also result in two separate S3 endpoints, and CRR is not designed for two-way synchronization, so this would not work well.
INCORRECT: “Use ElastiCache to cache the content” is incorrect. CloudFront is a better fit for this use case than using ElastiCache.
INCORRECT: “Use S3 Transfer Acceleration” is incorrect. Transfer Acceleration is used to accelerate object uploads to S3 over long distances (latency).
Question 17:
A Solutions Architect is launching an Amazon EC2 instance with multiple attached volumes by modifying the block device mapping. Which block devices can be specified in a block device mapping to be used with an EC2 instance? (choose 2)
A. EBS Volume
B. Snapshot
C. Instance store volume
D. S3 bucket
E. EFS Volume
Explanation
Each instance that you launch has an associated root device volume, either an Amazon EBS volume or an instance store volume.
You can use block device mapping to specify additional EBS volumes or instance store volumes to attach to an instance when it’s launched. You can also attach additional EBS volumes to a running instance.
You cannot use a block device mapping to specify a snapshot, EFS volume or S3 bucket.
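A hedged boto3 sketch showing both mapping types in a single launch (the AMI ID is a placeholder; the instance store mapping only takes effect on instance types that have ephemeral disks, such as m5d):

    import boto3

    ec2 = boto3.client("ec2")

    ec2.run_instances(
        ImageId="ami-0abc1234",  # placeholder
        InstanceType="m5d.large",
        MinCount=1,
        MaxCount=1,
        BlockDeviceMappings=[
            # An additional EBS volume
            {"DeviceName": "/dev/sdf", "Ebs": {"VolumeSize": 100, "VolumeType": "gp3"}},
            # An instance store (ephemeral) volume
            {"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},
        ],
    )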
CORRECT: “EBS volume” is a correct answer.
CORRECT: “Instance store volume” is also a correct answer.
INCORRECT: “EFS volume” is incorrect as described above.
INCORRECT: “Snapshot” is incorrect as described above.
INCORRECT: “S3 bucket” is incorrect as described above.
Question 19:
A Solutions Architect has created a VPC and is in the process of formulating the subnet design. The VPC will be used to host a two-tier application that will include Internet-facing web servers and internal-only DB servers. Zonal redundancy is required.
How many subnets are required to support this requirement?
A. 4 Subnets
B. 1 Subnet
C. 6 Subnets
D. 2 Subnets
Explanation
Zonal redundancy indicates that the architecture should be split across multiple Availability Zones. Each subnet is mapped to a single AZ.
A public subnet should be used for the Internet-facing web servers and a separate private subnet should be used for the internal-only DB servers. Therefore you need 4 subnets: one public and one private subnet in each of the two AZs.
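As a sketch, the four subnets could be created as follows (the VPC ID, CIDR blocks, and AZ names are placeholders). Note that what actually makes a subnet public is a route table entry pointing at an internet gateway:

    import boto3

    ec2 = boto3.client("ec2")

    # One public and one private subnet in each of two AZs = 4 subnets
    layout = [
        ("10.0.0.0/24", "us-east-1a"),  # public, AZ a
        ("10.0.1.0/24", "us-east-1b"),  # public, AZ b
        ("10.0.2.0/24", "us-east-1a"),  # private, AZ a
        ("10.0.3.0/24", "us-east-1b"),  # private, AZ b
    ]
    for cidr, az in layout:
        ec2.create_subnet(VpcId="vpc-0abc1234", CidrBlock=cidr, AvailabilityZone=az)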
CORRECT: “4 subnets” is the correct answer.
INCORRECT: “2 subnets” is incorrect as explained above.
INCORRECT: “6 subnets” is incorrect as explained above.
INCORRECT: “1 subnet” is incorrect as explained above.
Question 23:
An application runs on EC2 instances in a private subnet behind an Application Load Balancer in a public subnet. The application is highly available and distributed across multiple AZs. The EC2 instances must make API calls to an internet-based service. How can the Solutions Architect enable highly available internet connectivity?
A. Configure an internet gateway. Add a route to the gateway to each private subnet route table
B. Create a NAT gateway in the public subnet of each AZ. Update the route tables for each private subnet to direct internet-bound traffic to the NAT gateway
C. Create a NAT gateway and attach it to the VPC. Add a route to the gateway to each private subnet route table
D. Create a NAT instance in the private subnet of each AZ. Update the route tables for each private subnet to direct internet-bound traffic to the NAT instance
Explanation
The only solution presented that actually works is to create a NAT gateway in the public subnet of each AZ. NAT gateways must be created in a public subnet because they are assigned public (Elastic) IP addresses and use the internet gateway for internet access.
The route tables in the private subnets must then be configured with a route to the NAT gateway and then the EC2 instances will be able to access the internet (subject to security group configuration).
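A hedged boto3 sketch for one AZ (the subnet and route table IDs are placeholders); repeat per AZ so the loss of one zone does not remove internet access for the others:

    import boto3

    ec2 = boto3.client("ec2")

    # The NAT gateway lives in the PUBLIC subnet and needs an Elastic IP
    eip = ec2.allocate_address(Domain="vpc")
    natgw = ec2.create_nat_gateway(
        SubnetId="subnet-0pub1a",  # placeholder public subnet in AZ a
        AllocationId=eip["AllocationId"],
    )
    nat_id = natgw["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # The private subnet's route table sends internet-bound traffic to the NAT gateway
    ec2.create_route(
        RouteTableId="rtb-0priv1a",  # placeholder private route table in AZ a
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )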
CORRECT: “Create a NAT gateway in the public subnet of each AZ. Update the route tables for each private subnet to direct internet-bound traffic to the NAT gateway” is the correct answer.
INCORRECT: “Create a NAT gateway and attach it to the VPC. Add a route to the gateway to each private subnet route table” is incorrect. You do not attach NAT gateways to VPCs, you add them to public subnets.
INCORRECT: “Configure an internet gateway. Add a route to the gateway to each private subnet route table” is incorrect. You cannot add a route to an internet gateway to a private subnet route table (private EC2 instances don’t even have public IP addresses).
INCORRECT: “Create a NAT instance in the private subnet of each AZ. Update the route tables for each private subnet to direct internet-bound traffic to the NAT instance” is incorrect. You do not create NAT instances in private subnets, they must be created in public subnets.
Question 24:
The database layer of an on-premises web application is being migrated to AWS. The database currently uses an in-memory cache. A Solutions Architect must deliver a solution that supports high availability and replication for the caching layer.
Which service should the Solutions Architect recommend?
A. Amazon ElastiCache Redis
B. Amazon DynamoDB
C. Amazon RDS Multi-AZ
D. Amazon ElastiCache Memcached
Explanation
Amazon ElastiCache Redis is an in-memory database cache that supports high availability through replicas and Multi-AZ with automatic failover. ElastiCache Memcached, by contrast, supports neither replication nor Multi-AZ failover.
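A hedged boto3 sketch of a Multi-AZ Redis replication group with automatic failover (the group ID and node type are assumptions):

    import boto3

    ec = boto3.client("elasticache")

    ec.create_replication_group(
        ReplicationGroupId="app-cache",           # placeholder
        ReplicationGroupDescription="HA cache for the web application",
        Engine="redis",
        CacheNodeType="cache.t3.medium",          # placeholder node type
        NumCacheClusters=2,                       # one primary plus one replica
        AutomaticFailoverEnabled=True,            # promote the replica on failure
        MultiAZEnabled=True,                      # place nodes in different AZs
    )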
CORRECT: “Amazon ElastiCache Redis” is the correct answer.
INCORRECT: “Amazon ElastiCache Memcached” is incorrect as it does not support high availability or multi-AZ.
INCORRECT: “Amazon RDS Multi-AZ” is incorrect. This is not an in-memory database and it is not suitable for use as a caching layer.
INCORRECT: “Amazon DynamoDB” is incorrect. DynamoDB is a non-relational database. You would not use it for a caching layer. Also, the in-memory, low-latency caching for DynamoDB is implemented using DynamoDB Accelerator (DAX).
Question 25:
A web application receives order processing information from customers and places the messages on an Amazon SQS queue. A fleet of Amazon EC2 instances is configured to pick up the messages, process them, and store the results in a DynamoDB table. The current configuration has been resulting in a large number of empty responses to ReceiveMessage API requests.
A Solutions Architect needs to eliminate empty responses to reduce operational overhead. How can this be done?
A. Configure Long Polling to eliminate empty responses by allowing Amazon SQS to wait until a message is available in a queue before sending a response
B. Use a Standard queue to provide at-least-once delivery, which means that each message is delivered at least once
C. Configure Short Polling to eliminate empty responses by reducing the length of time a connection request remains open
D. Use a FIFO queue to preserve the exact order in which messages are sent and received
Explanation
The correct answer is to use Long Polling, which will eliminate empty responses by allowing Amazon SQS to wait until a message is available in a queue before sending a response.
The problem does not relate to the order in which the messages are processed, and there are no concerns over messages being delivered more than once, so it does not matter whether you use a FIFO or standard queue. A configuration sketch follows the lists below.
Long Polling:
– Uses fewer requests and reduces cost.
– Eliminates false empty responses by querying all servers.
– SQS waits until a message is available in the queue before sending a response.
Short Polling:
– Does not wait for messages to appear in the queue.
– It queries only a subset of the available servers for messages (based on weighted random distribution).
– Short polling is the default.
– ReceiveMessageWaitTimeSeconds is set to 0.
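A minimal boto3 sketch of enabling long polling (the queue URL is a placeholder):

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

    # Enable long polling for every consumer of the queue (maximum is 20 seconds)
    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
    )

    # Or enable it per request; SQS holds the connection open until a message
    # arrives or the wait time expires, eliminating empty responses
    messages = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)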
CORRECT: “Configure Long Polling to eliminate empty responses by allowing Amazon SQS to wait until a message is available in a queue before sending a response” is the correct answer.
INCORRECT: “Use a Standard queue to provide at-least-once delivery, which means that each message is delivered at least once” is incorrect as explained above.
INCORRECT: “Use a FIFO queue to preserve the exact order in which messages are sent and received” is incorrect as explained above.
INCORRECT: “Configure Short Polling to eliminate empty responses by reducing the length of time a connection request remains open” is incorrect as explained above.
Question 27:
A Solutions Architect is deploying a production application that will use several Amazon EC2 instances and run constantly on an ongoing basis. The application cannot be interrupted or restarted. Which EC2 pricing model would be best for this workload?
A. Reserved instances
B. On-demand instances
C. Spot instances
D. Flexible instances
Explanation
In this scenario, for a stable workload that will run constantly on an ongoing basis, Reserved Instances (RIs) will be the most affordable solution.
RIs provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. With Convertible RIs you have the flexibility to change instance families, OS types, and tenancies while continuing to benefit from RI pricing.
CORRECT: “Reserved instances” is the correct answer.
INCORRECT: “On-demand instances” is incorrect. On-Demand is useful for short-term, ad-hoc requirements where the job cannot afford to be interrupted, but it is typically more expensive than Spot instances.
INCORRECT: “Spot instances” is incorrect. Spot is better suited to short-term jobs that can afford to be interrupted, and it offers the lowest price of all the options.
INCORRECT: “Flexible instances” is incorrect. There is no EC2 pricing model called “flexible instances”.