Neal Davis - Practice Test 1 - Correct Flashcards
Question 1:
A new application will run across multiple Amazon ECS tasks. Front-end application logic will process data and then pass that data to a back-end ECS task to perform further processing and write the data to a datastore. The Architect would like to reduce interdependencies so that failures do not impact other components.
Which solution should the Architect use?
A. Create an Amazon SQS queue that pushes messages to the back-end. Configure the front-end to add messages to the queue
B. Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages
C. Create an Amazon Kinesis Firehose delivery stream that delivers data to an Amazon S3 bucket, configure the front-end to write data to the stream and the back-end to read data from Amazon S3
D. Create an Amazon Kinesis Firehose delivery stream and configure the front-end to add data to the stream and the back-end to read data from the stream
Explanation
This is a good use case for Amazon SQS. SQS is a service that is used for decoupling applications, thus reducing interdependencies, through a message bus. The front-end application can place messages on the queue and the back-end can then poll the queue for new messages. Please remember that Amazon SQS is pull-based (polling), not push-based (use SNS for push-based).
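As an illustration, a minimal boto3 sketch of this decoupled pattern (the queue name and message body are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")

# One-time setup: the queue that decouples the two tiers
queue_url = sqs.create_queue(QueueName="front-to-back")["QueueUrl"]

# Front-end task: add a message to the queue
sqs.send_message(QueueUrl=queue_url, MessageBody='{"job_id": "123"}')

# Back-end task: long-poll the queue, process, then delete each message
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
).get("Messages", [])
for msg in messages:
    # ... process msg["Body"] and write the result to the datastore ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```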
CORRECT: “Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages” is the correct answer.
INCORRECT: “Create an Amazon Kinesis Firehose delivery stream and configure the front-end to add data to the stream and the back-end to read data from the stream” is incorrect. Amazon Kinesis Firehose is used for streaming data. With Firehose the data is loaded in near real time into a destination, which can be Amazon S3, Amazon Redshift, Elasticsearch, or Splunk. This is not an ideal use case for Firehose as this is not streaming data and there is no need to load the data into an additional AWS service.
INCORRECT: “Create an Amazon Kinesis Firehose delivery stream that delivers data to an Amazon S3 bucket, configure the front-end to write data to the stream and the back-end to read data from Amazon S3” is incorrect as per the previous explanation.
INCORRECT: “Create an Amazon SQS queue that pushes messages to the back-end. Configure the front-end to add messages to the queue” is incorrect as SQS is pull-based, not push-based. The back-end tasks must poll the queue to find jobs to process.
Question 2
A surveying team is using a fleet of drones to collect images of construction sites. The surveying team’s laptops lack the inbuilt storage and compute capacity to transfer the images and process the data. While the team has Amazon EC2 instances for processing and Amazon S3 buckets for storage, network connectivity is intermittent and unreliable. The images need to be processed to evaluate the progress of each construction site.
What should a solutions architect recommend?
A. Process and store the images using AWS Snowball Edge devices.
B. Configure Amazon Kinesis Data Firehose to create multiple delivery streams aimed separately at the S3 buckets for storage and the EC2 instances for processing the images.
C. During intermittent connectivity to EC2 instances, upload images to Amazon SQS.
D. Cache the images locally on a hardware appliance pre-installed with AWS Storage Gateway to process the images when connectivity is restored.
Explanation
A physical AWS Snowball Edge device provides far more built-in compute and storage than the team’s current laptops. This removes the reliance on a stable network connection for processing the images and solves the team’s problem simply and efficiently.
CORRECT: “Process and store the images using AWS Snowball Edge devices” is the correct answer (as explained above).
INCORRECT: “During intermittent connectivity to EC2 instances, upload images to Amazon SQS” is incorrect as you would still need a reliable internet connection to upload any images to Amazon SQS.
INCORRECT: “Configure Amazon Kinesis Data Firehose to create multiple delivery streams aimed separately at the S3 buckets for storage and the EC2 instances for processing the images” is incorrect as you would still need a reliable internet connection to upload any images to the Amazon Kinesis Service.
INCORRECT: “Cache the images locally on a hardware appliance pre-installed with AWS Storage Gateway to process the images when connectivity is restored” is incorrect as you would still need a reliable internet connection to upload any images through the AWS Storage Gateway service.
Question 3
A developer created an application that uses Amazon EC2 and an Amazon RDS MySQL database instance. The developer stored the database user name and password in a configuration file on the root EBS volume of the EC2 application instance. A Solutions Architect has been asked to design a more secure solution.
What should the Solutions Architect do to achieve this requirement?
A. Install an Amazon-trusted root certificate on the application instance and use SSL/TLS encrypted connections to the database.
B. Create an IAM role with permission to access the database. Attach this IAM role to the EC2 instance.
C. Attach an additional volume to the EC2 instance with encryption enabled. Move the configuration file to the encrypted volume.
D. Move the configuration file to an Amazon S3 bucket. Create an IAM role with permission to the bucket and attach it to the EC2 instance.
Explanation
The key problem here is having plain text credentials stored in a file. Even if you encrypt the volume there is still a security risk as the credentials are loaded by the application and passed to RDS.
The best way to secure this solution is to get rid of the stored credentials completely by using an IAM role instead. With IAM database authentication for RDS MySQL, the IAM role is granted permission to connect to the database and is attached to the EC2 instance. The instance then obtains temporary security credentials from AWS STS, which is much more secure.
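As a rough sketch of how this works, the application requests a short-lived authentication token instead of reading a stored password (the endpoint, user name, and certificate path are hypothetical; PyMySQL is just one driver that supports TLS):

```python
import boto3
import pymysql

rds = boto3.client("rds")

# The EC2 instance role must allow the rds-db:connect action for this DB user.
token = rds.generate_db_auth_token(
    DBHostname="mydb.abc123.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="app_user",
)

# The token replaces a stored password; IAM auth requires SSL/TLS.
conn = pymysql.connect(
    host="mydb.abc123.us-east-1.rds.amazonaws.com",
    port=3306,
    user="app_user",
    password=token,
    ssl={"ca": "/opt/rds-global-bundle.pem"},  # AWS-provided root certificate bundle
)
```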
CORRECT: “Create an IAM role with permission to access the database. Attach this IAM role to the EC2 instance” is the correct answer.
INCORRECT: “Move the configuration file to an Amazon S3 bucket. Create an IAM role with permission to the bucket and attach it to the EC2 instance” is incorrect. This just relocates the file; the contents are still unsecured and must be loaded by the application and passed to RDS. This is an insecure process.
INCORRECT: “Attach an additional volume to the EC2 instance with encryption enabled. Move the configuration file to the encrypted volume” is incorrect. This will only encrypt the file at rest, it still must be read, and the contents passed to RDS which is insecure.
INCORRECT: “Install an Amazon-trusted root certificate on the application instance and use SSL/TLS encrypted connections to the database” is incorrect. The file is still unsecured on the EBS volume so encrypting the credentials in an encrypted channel between the EC2 instance and RDS does not solve all security issues.
Question 4:
A company runs an application on six web application servers in an Amazon EC2 Auto Scaling group in a single Availability Zone. The application is fronted by an Application Load Balancer (ALB). A Solutions Architect needs to modify the infrastructure to be highly available without making any modifications to the application.
Which architecture should the Solutions Architect choose to enable high availability?
A. Modify the Auto Scaling group to use two instances across each of three Availability Zones
B. Create a launch template that can be used to quickly create more instances in another Region
C. Create an Amazon CloudFront distribution with a custom origin across multiple Regions
D. Create an Auto Scaling group to launch three instances across each of two Regions
Explanation
The only thing that needs to be changed in this scenario to enable HA is to split the instances across multiple Availability Zones. The architecture already uses Auto Scaling and Elastic Load Balancing so there is plenty of resilience to failure. Once the instances are running across multiple AZs there will be AZ-level fault tolerance as well.
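As a sketch, the change is a single Auto Scaling group update that adds subnets in additional AZs (the group name and subnet IDs are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# VPCZoneIdentifier is a comma-separated list of subnets, one per AZ.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222,subnet-ccc333",
    MinSize=6,
    MaxSize=12,
    DesiredCapacity=6,  # two instances in each of the three AZs
)
```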
CORRECT: “Modify the Auto Scaling group to use two instances across each of three Availability Zones” is the correct answer.
INCORRECT: “Create an Amazon CloudFront distribution with a custom origin across multiple Regions” is incorrect. CloudFront is not used to provide HA for an application; it is used to accelerate the delivery of content to users.
INCORRECT: “Create a launch template that can be used to quickly create more instances in another Region” is incorrect. Multi-AZ should be enabled rather than multi-Region.
INCORRECT: “Create an Auto Scaling group to launch three instances across each of two Regions” is incorrect. HA can be achieved within a Region by simply enabling more AZs in the ASG. An ASG cannot launch instances in multiple Regions.
Question 5:
A company has uploaded some highly critical data to an Amazon S3 bucket. Management are concerned about data availability and require that steps are taken to protect the data from accidental deletion. The data should still be accessible, and a user should be able to delete the data intentionally.
Which combination of steps should a solutions architect take to accomplish this? (Select TWO.)
A. Enable MFA Delete on the S3 bucket
B. Enable versioning on the S3 bucket
C. Create a lifecycle policy for the objects in the S3 bucket
D. Enable default encryption on the S3 bucket
E. Create a bucket policy on the S3 bucket
Explanation
Multi-factor authentication (MFA) delete adds an additional step before an object can be deleted from a versioning-enabled bucket.
With MFA delete the bucket owner must include the x-amz-mfa request header in requests to permanently delete an object version or change the versioning state of the bucket.
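A minimal boto3 sketch of enabling both features together (the bucket name, MFA device ARN, and token code are hypothetical; MFA Delete can only be changed by the root user):

```python
import boto3

s3 = boto3.client("s3")

# The MFA parameter is the device serial/ARN followed by a current token code.
s3.put_bucket_versioning(
    Bucket="critical-data-bucket",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```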
CORRECT: “Enable versioning on the S3 bucket” is a correct answer.
CORRECT: “Enable MFA Delete on the S3 bucket” is also a correct answer.
INCORRECT: “Create a bucket policy on the S3 bucket” is incorrect. A bucket policy is not required to enable MFA delete.
INCORRECT: “Enable default encryption on the S3 bucket” is incorrect. Encryption does not protect against deletion.
INCORRECT: “Create a lifecycle policy for the objects in the S3 bucket” is incorrect. A lifecycle policy will move data to another storage class but does not protect against deletion.
Question 6:
An Amazon VPC contains several Amazon EC2 instances. The instances need to make API calls to Amazon DynamoDB. A solutions architect needs to ensure that the API calls do not traverse the internet.
How can this be accomplished? (Select TWO.)
A. Create a route table entry for the endpoint
B. Create a gateway endpoint for DynamoDB
C. Create a new DynamoDB table that uses the endpoint
D. Create an ENI for the endpoint in each of the subnets of the VPC
E. Create a VPC peering connection between the VPC and DynamoDB
Explanation
Amazon DynamoDB and Amazon S3 support gateway endpoints, not interface endpoints. With a gateway endpoint you create the endpoint in the VPC, attach a policy allowing access to the service, and then specify the route table to create a route table entry in.
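A minimal sketch of both steps in one API call (the VPC, route table, and Region are hypothetical); specifying the route table IDs is what creates the route table entries:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0abc1234"],  # a prefix-list route is added here
)
```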
CORRECT: “Create a route table entry for the endpoint” is a correct answer.
CORRECT: “Create a gateway endpoint for DynamoDB” is also a correct answer.
INCORRECT: “Create a new DynamoDB table that uses the endpoint” is incorrect as it is not necessary to create a new DynamoDB table.
INCORRECT: “Create an ENI for the endpoint in each of the subnets of the VPC” is incorrect as an ENI is used by an interface endpoint, not a gateway endpoint.
INCORRECT: “Create a VPC peering connection between the VPC and DynamoDB” is incorrect as you cannot create a VPC peering connection between a VPC and a public AWS service as public services are outside of VPCs.
Question 7:
A company hosts a multiplayer game on AWS. The application uses Amazon EC2 instances in a single Availability Zone and users connect over Layer 4. A Solutions Architect has been tasked with making the architecture highly available and also more cost-effective.
How can the solutions architect best meet these requirements? (Select TWO.)
A. Increase the number of instances and use smaller EC2 instance types
B. Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically
C. Configure a Network Load Balancer in front of the EC2 instances
D. Configure an Application Load Balancer in front of the EC2 instances
E. Configure an Auto Scaling group to add or remove instances in the Availability Zone automatically
Explanation
The solutions architect must enable high availability for the architecture and ensure it is cost-effective. To enable high availability an Amazon EC2 Auto Scaling group should be created to add and remove instances across multiple availability zones.
In order to distribute the traffic to the instances the architecture should use a Network Load Balancer which operates at Layer 4. This architecture will also be cost-effective as the Auto Scaling group will ensure the right number of instances are running based on demand.
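As a sketch, creating the Layer 4 load balancer and a TCP listener might look like this (the names, port, subnets, and VPC ID are hypothetical):

```python
import boto3

elbv2 = boto3.client("elbv2")

# NLBs operate at Layer 4; one subnet per AZ provides high availability.
nlb = elbv2.create_load_balancer(
    Name="game-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-aaa111", "subnet-bbb222"],
)

# Target group of EC2 instances for the game traffic.
tg = elbv2.create_target_group(
    Name="game-targets",
    Protocol="TCP",
    Port=7777,
    VpcId="vpc-0abc1234",
    TargetType="instance",
)

# A TCP listener forwards Layer 4 traffic to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="TCP",
    Port=7777,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)
```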
CORRECT: “Configure a Network Load Balancer in front of the EC2 instances” is a correct answer.
CORRECT: “Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically” is also a correct answer.
INCORRECT: “Increase the number of instances and use smaller EC2 instance types” is incorrect as this is not the most cost-effective option. Auto Scaling should be used to maintain the right number of active instances.
INCORRECT: “Configure an Auto Scaling group to add or remove instances in the Availability Zone automatically” is incorrect as this is not highly available as it’s a single AZ.
INCORRECT: “Configure an Application Load Balancer in front of the EC2 instances” is incorrect as an ALB operates at Layer 7 rather than Layer 4.
Question 8:
A company runs an application on an Amazon EC2 instance that requires 250 GB of storage space. The application is not used often and has small spikes in usage on weekday mornings and afternoons. The disk I/O can vary, with peaks hitting a maximum of 3,000 IOPS. A Solutions Architect must recommend the most cost-effective storage solution that delivers the performance required.
Which configuration should the Solutions Architect recommend?
A. Amazon EBS General Purpose SSD (gp2)
B. Amazon EBS Provisioned IOPS SSD (io1)
C. Amazon EBS Throughput Optimized HDD (st1)
D. Amazon EBS Cold HDD (sc1)
Explanation
General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time.
Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 16,000 IOPS (at 5,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size. AWS designs gp2 volumes to deliver their provisioned performance 99% of the time. A gp2 volume can range in size from 1 GiB to 16 TiB.
In this configuration the volume will provide a baseline performance of 750 IOPS but will always be able to burst to the required 3,000 IOPS during periods of increased traffic.
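The baseline calculation can be expressed as a small Python illustration:

```python
def gp2_baseline_iops(size_gib: float) -> float:
    """Baseline IOPS for gp2: 3 IOPS per GiB, floor of 100, cap of 16,000."""
    return min(max(100, 3 * size_gib), 16_000)

print(gp2_baseline_iops(250))  # 750.0 -- bursts to 3,000 using I/O credits
```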
CORRECT: “Amazon EBS General Purpose SSD (gp2)” is the correct answer.
INCORRECT: “Amazon EBS Provisioned IOPS SSD (io1)” is incorrect. The io1 volume type will be more expensive and is not necessary for the performance levels required.
INCORRECT: “Amazon EBS Cold HDD (sc1)” is incorrect. The sc1 volume type is not going to deliver the performance requirements as it cannot burst to 3,000 IOPS.
INCORRECT: “Amazon EBS Throughput Optimized HDD (st1)” is incorrect. The st1 volume type is not going to deliver the performance requirements as it cannot burst to 3,000 IOPS.
Question 9:
A legacy tightly-coupled High Performance Computing (HPC) application will be migrated to AWS. Which network adapter type should be used?
A. Elastic Network Adapter (ENA)
B. Elastic Fabric Adapter (EFA)
C. Elastic IP Address
D. Elastic Network Interface (ENI)
Explanation
An Elastic Fabric Adapter is an AWS Elastic Network Adapter (ENA) with added capabilities. The EFA lets you apply the scale, flexibility, and elasticity of the AWS Cloud to tightly-coupled HPC apps. It is ideal for tightly-coupled apps as it supports the OS-bypass networking used by the Message Passing Interface (MPI).
CORRECT: “Elastic Fabric Adapter (EFA)” is the correct answer.
INCORRECT: “Elastic Network Interface (ENI)” is incorrect. The ENI is a basic type of adapter and is not the best choice for this use case.
INCORRECT: “Elastic Network Adapter (ENA)” is incorrect. The ENA, which provides Enhanced Networking, does provide high bandwidth and low inter-instance latency but it does not support the features for a tightly-coupled app that the EFA does.
INCORRECT: “Elastic IP Address” is incorrect. An Elastic IP address is just a static public IP address, it is not a type of network adapter.
Question 10:
An application is being created that will use Amazon EC2 instances to generate and store data. Another set of EC2 instances will then analyze and modify the data. Storage requirements will be significant and will continue to grow over time. The application architects require a scalable storage solution that both sets of EC2 instances can access.
Which action would meet these needs?
A. Store the data in Amazon S3 Glacier. Update the vault policy to allow access to the application instances
B. Store the data in an Amazon EFS filesystem. Mount the file system on the application instances
C. Store the data in AWS Storage Gateway. Setup AWS Direct Connect between the Gateway appliance and the EC2 instances
D. Store the data in an Amazon EBS volume. Mount the EBS volume on the application instances
Explanation
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
Amazon EFS supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, providing a common data source for workloads and applications running on more than one instance or server.
For this scenario, EFS is a great choice as it will provide a scalable file system that can be mounted by multiple EC2 instances and accessed simultaneously.
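A minimal boto3 sketch of provisioning the file system (the subnet IDs are hypothetical; mount targets can only be created once the file system is available):

```python
import boto3

efs = boto3.client("efs")

# The file system grows and shrinks automatically as files are added/removed.
fs = efs.create_file_system(PerformanceMode="generalPurpose", Encrypted=True)

# One mount target per Availability Zone used by the EC2 instances.
for subnet in ["subnet-aaa111", "subnet-bbb222"]:
    efs.create_mount_target(FileSystemId=fs["FileSystemId"], SubnetId=subnet)
```

Each instance then mounts the file system over NFS and all instances see the same data.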
CORRECT: “Store the data in an Amazon EFS filesystem. Mount the file system on the application instances” is the correct answer.
INCORRECT: “Store the data in an Amazon EBS volume. Mount the EBS volume on the application instances” is incorrect. Though there is a newer feature, Amazon EBS Multi-Attach, that allows a volume to be attached to multiple Nitro-based instances, it is not on the exam yet and comes with some specific constraints.
INCORRECT: “Store the data in Amazon S3 Glacier. Update the vault policy to allow access to the application instances” is incorrect as S3 Glacier is not a suitable storage location for live access to data, it is used for archival.
INCORRECT: “Store the data in AWS Storage Gateway. Setup AWS Direct Connect between the Gateway appliance and the EC2 instances” is incorrect. There is no reason to store the data on-premises in a Storage Gateway, using EFS is a much better solution.
Question 13:
A company runs an application that uses an Amazon RDS PostgreSQL database. The database is currently not encrypted. A Solutions Architect has been instructed that due to new compliance requirements all existing and new data in the database must be encrypted. The database experiences high volumes of changes and no data can be lost.
How can the Solutions Architect enable encryption for the database without incurring any data loss?
A. Update the RDS DB to Multi-AZ mode and enable encryption for the standby replica. Perform a failover to the standby instance and then delete the unencrypted RDS DB instance.
B. Create a snapshot of the existing RDS DB instance. Create an encrypted copy of the snapshot. Create a new RDS DB instance from the encrypted snapshot and update the application. Use AWS DMS to synchronize data between the source and destination RDS DBs.
C. Create an RDS read replica and specify an encryption key. Promote the encrypted read replica to primary. Update the application to point to the new RDS DB endpoint.
D. Create a snapshot of the existing RDS DB instance. Create an encrypted copy of the snapshot. Create a new RDS DB instance from the encrypted snapshot. Configure the application to use the new DB endpoint.
Explanation
You cannot change the encryption status of an existing RDS DB instance. Encryption must be specified when creating the RDS DB instance. The best way to encrypt an existing database is to take a snapshot, encrypt a copy of the snapshot and restore the snapshot to a new RDS DB instance. This results in an encrypted database that is a new instance. Applications must be updated to use the new RDS DB endpoint.
In this scenario as there is a high rate of change, the databases will be out of sync by the time the new copy is created and is functional. The best way to capture the changes between the source (unencrypted) and destination (encrypted) DB is to use AWS Database Migration Service (DMS) to synchronize the data.
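A sketch of the snapshot-copy-restore portion of the process (the identifiers and KMS key alias are hypothetical; the DMS synchronization is configured separately):

```python
import boto3

rds = boto3.client("rds")

# 1. Snapshot the existing, unencrypted instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="orders-db",
    DBSnapshotIdentifier="orders-db-snap",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="orders-db-snap")

# 2. Copy the snapshot; supplying a KmsKeyId makes the copy encrypted.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="orders-db-snap",
    TargetDBSnapshotIdentifier="orders-db-snap-encrypted",
    KmsKeyId="alias/rds-key",
)

# 3. Restore the encrypted snapshot to a new, encrypted DB instance
#    (in practice, wait for the copy to complete before restoring).
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="orders-db-encrypted",
    DBSnapshotIdentifier="orders-db-snap-encrypted",
)
```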
CORRECT: “Create a snapshot of the existing RDS DB instance. Create an encrypted copy of the snapshot. Create a new RDS DB instance from the encrypted snapshot and update the application. Use AWS DMS to synchronize data between the source and destination RDS DBs” is the correct answer.
INCORRECT: “Create a snapshot of the existing RDS DB instance. Create an encrypted copy of the snapshot. Create a new RDS DB instance from the encrypted snapshot. Configure the application to use the new DB endpoint” is incorrect. This answer creates an encrypted DB instance but does not synchronize the data.
INCORRECT: “Create an RDS read replica and specify an encryption key. Promote the encrypted read replica to primary. Update the application to point to the new RDS DB endpoint” is incorrect. You cannot create an encrypted read replica of an unencrypted RDS DB. The read replica will always have the same encryption status as the RDS DB it is created from.
INCORRECT: “Update the RDS DB to Multi-AZ mode and enable encryption for the standby replica. Perform a failover to the standby instance and then delete the unencrypted RDS DB instance” is incorrect. You also cannot have an encrypted Multi-AZ standby instance of an unencrypted RDS DB.
Question 14:
A Microsoft Windows file server farm uses Distributed File System Replication (DFSR) to synchronize data in an on-premises environment. The infrastructure is being migrated to the AWS Cloud.
Which service should the solutions architect use to replace the file server farm?
A. Amazon EBS
B. Amazon FSx
C. AWS Storage Gateway
D. Amazon EFS
Explanation
Amazon FSx for Windows File Server supports DFS Namespaces and DFS Replication. This is the best solution for replacing the on-premises infrastructure.
CORRECT: “Amazon FSx” is the correct answer.
INCORRECT: “Amazon EFS” is incorrect. You cannot replace a Windows file server farm with EFS as it uses the NFS protocol rather than SMB.
INCORRECT: “Amazon EBS” is incorrect. Amazon EBS provides block-based volumes that are attached to EC2 instances. It cannot be used for replacing a shared Windows file server farm using DFSR.
INCORRECT: “AWS Storage Gateway” is incorrect. This service is used for providing cloud storage solutions for on-premises servers. In this case the infrastructure is being migrated into the AWS Cloud.
Question 16:
A company is migrating from an on-premises infrastructure to the AWS Cloud. One of the company’s applications stores files on a Windows file server farm that uses Distributed File System Replication (DFSR) to keep data in sync. A solutions architect needs to replace the file server farm.
Which service should the solutions architect use?
A. Amazon EFS
B. AWS Storage Gateway
C. Amazon FSx
D. Amazon S3
Explanation
Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol.
Amazon FSx is built on Windows Server and provides a rich set of administrative features that include end-user file restore, user quotas, and Access Control Lists (ACLs).
Additionally, Amazon FSx for Windows File Server supports Distributed File System Replication (DFSR) in Single-AZ deployments.
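A sketch of creating such a file system with boto3 (the subnet, Active Directory ID, and sizing are hypothetical):

```python
import boto3

fsx = boto3.client("fsx")

# Single-AZ deployment types support DFS Replication.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,              # GiB
    SubnetIds=["subnet-aaa111"],
    WindowsConfiguration={
        "DeploymentType": "SINGLE_AZ_2",
        "ThroughputCapacity": 32,     # MB/s
        "ActiveDirectoryId": "d-1234567890",  # AWS Managed Microsoft AD
    },
)
```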
CORRECT: “Amazon FSx” is the correct answer.
INCORRECT: “Amazon EFS” is incorrect as EFS is an NFS file system designed for Linux-based workloads; it does not support the SMB protocol used by Windows file servers.
INCORRECT: “Amazon S3” is incorrect as this is not a suitable replacement for a Microsoft filesystem.
INCORRECT: “AWS Storage Gateway” is incorrect as this service is primarily used for connecting on-premises storage to cloud storage. It consists of a software appliance installed on-premises that can serve SMB shares, but it actually stores the data on S3. It is also used for migration. However, in this case the company needs to replace the file server farm and Amazon FSx is the best choice for this job.
Question 19:
A solutions architect is creating a system that will run analytics on financial data for several hours a night, 5 days a week. The analysis is expected to run for the same duration and cannot be interrupted once it is started. The system will be required for a minimum of 1 year.
What should the solutions architect configure to ensure the EC2 instances are available when they are needed?
A. Regional Reserved Instances
B. On-Demand Instances
C. Savings Plans
D. On-Demand Capacity Reservations
Explanation
On-Demand Capacity Reservations enable you to reserve compute capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. This gives you the ability to create and manage Capacity Reservations independently from the billing discounts offered by Savings Plans or Regional Reserved Instances.
By creating Capacity Reservations, you ensure that you always have access to EC2 capacity when you need it, for as long as you need it. You can create Capacity Reservations at any time, without entering a one-year or three-year term commitment, and the capacity is available immediately.
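A minimal sketch of reserving the capacity (the instance type, AZ, and count are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Reserves capacity in a specific AZ with no term commitment.
ec2.create_capacity_reservation(
    InstanceType="r5.2xlarge",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",
    InstanceCount=4,
    EndDateType="unlimited",  # keep the reservation until explicitly cancelled
)
```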
CORRECT: “On-Demand Capacity Reservations” is the correct answer.
INCORRECT: “Regional Reserved Instances” is incorrect. This type of reservation does not reserve capacity.
INCORRECT: “On-Demand Instances” is incorrect. This does not provide any kind of capacity reservation.
INCORRECT: “Savings Plans” is incorrect. This pricing option does not provide a capacity reservation.
Question 21:
An organization wants to share regular updates about their charitable work using static webpages. The pages are expected to generate a large number of views from around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective solution.
Which action should the solutions architect take to accomplish this?
A. Use cross-Region replication to all Regions
B. Generate presigned URLs for the files
C. Use the geoproximity feature of Amazon Route 53
D. Use Amazon CloudFront with the S3 bucket as its origin
Explanation
Amazon CloudFront can be used to cache the files in edge locations around the world and this will improve the performance of the webpages.
To serve a static website hosted on Amazon S3, you can deploy a CloudFront distribution using one of these configurations:
Using a REST API endpoint as the origin with access restricted by an origin access identity (OAI)
Using a website endpoint as the origin with anonymous (public) access allowed
Using a website endpoint as the origin with access restricted by a Referer header
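As a sketch, a minimal distribution using the bucket's REST API endpoint as the origin (the bucket name is hypothetical; a production setup would normally restrict the bucket with an OAI):

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

resp = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # must be unique per request
        "Comment": "Charity static site",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-static-site",
                "DomainName": "charity-pages.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},  # empty = public bucket
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-static-site",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Legacy cache settings; a managed cache policy can be used instead.
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
    }
)
print(resp["Distribution"]["DomainName"])  # serve the pages from this domain
```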
CORRECT: “Use Amazon CloudFront with the S3 bucket as its origin” is the correct answer.
INCORRECT: “Generate presigned URLs for the files” is incorrect as this is used to restrict access which is not a requirement.
INCORRECT: “Use cross-Region replication to all Regions” is incorrect as this does not provide a mechanism for directing users to the closest copy of the static webpages.
INCORRECT: “Use the geoproximity feature of Amazon Route 53” is incorrect as this does not include a solution for having multiple copies of the data in different geographic locations.
Question 22:
Amazon EC2 instances in a development environment run between 9am and 5pm Monday-Friday. Production instances run 24/7. Which pricing models should be used? (Select TWO.)
A. On-demand capacity reservations for the development environment
B. Use Spot instances for the development environment
C. Use On-Demand instances for the production environment
D. Use Reserved instances for the development environment
E. Use Reserved instances for the production environment
Explanation
Capacity reservations have no commitment and can be created and canceled as needed. This is ideal for the development environment as it will ensure the capacity is available. There is no price advantage, but none of the other options provide a price advantage whilst also ensuring capacity is available.
Reserved instances are a good choice for workloads that run continuously. This is a good option for the production environment.
CORRECT: “On-demand capacity reservations for the development environment” is a correct answer.
CORRECT: “Use Reserved instances for the production environment” is also a correct answer.
INCORRECT: “Use Spot instances for the development environment” is incorrect. Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. Spot instances are not suitable for the development environment as important work may be interrupted.
INCORRECT: “Use Reserved instances for the development environment” is incorrect as they require a long-term commitment which is not ideal for a development environment.
INCORRECT: “Use On-Demand instances for the production environment” is incorrect. There is no long-term commitment required when you purchase On-Demand Instances. However, you do not get any discount and therefore this is the most expensive option.
Question 23:
A company uses an Amazon RDS MySQL database instance to store customer order data. The security team have requested that SSL/TLS encryption in transit must be used for encrypting connections to the database from application servers. The data in the database is currently encrypted at rest using an AWS KMS key.
How can a Solutions Architect enable encryption in transit?
A. Enable encryption in transit using the RDS Management console and obtain a key using AWS KMS
B. Take a snapshot of the RDS instance. Restore the snapshot to a new instance with encryption in transit enabled.
C. Download the AWS-provided root certificates. Use the certificates when connecting to the RDS DB instance.
D. Add a self-signed certificate to the RDS DB instance. Use the certificates in all connections to the RDS DB instance.
Explanation
Amazon RDS creates an SSL certificate and installs the certificate on the DB instance when Amazon RDS provisions the instance. These certificates are signed by a certificate authority. The SSL certificate includes the DB instance endpoint as the Common Name (CN) for the SSL certificate to guard against spoofing attacks.
You can download a root certificate from AWS that works for all Regions or you can download Region-specific intermediate certificates.
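A sketch of putting this into practice (the truststore URL is the AWS-published bundle location; the endpoint and credentials are hypothetical, and PyMySQL is just one driver that supports TLS):

```python
import urllib.request
import pymysql

# Download the AWS-provided root certificate bundle.
urllib.request.urlretrieve(
    "https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem",
    "global-bundle.pem",
)

# Passing the CA bundle makes the driver verify the server certificate,
# enforcing SSL/TLS encryption in transit.
conn = pymysql.connect(
    host="orders-db.abc123.us-east-1.rds.amazonaws.com",
    user="app_user",
    password="example-password",  # hypothetical
    ssl={"ca": "global-bundle.pem"},
)
```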
CORRECT: “Download the AWS-provided root certificates. Use the certificates when connecting to the RDS DB instance” is the correct answer.
INCORRECT: “Take a snapshot of the RDS instance. Restore the snapshot to a new instance with encryption in transit enabled” is incorrect. There is no need to do this as a certificate is created when the DB instance is launched.
INCORRECT: “Enable encryption in transit using the RDS Management console and obtain a key using AWS KMS” is incorrect. You cannot enable/disable encryption in transit using the RDS management console or use a KMS key.
INCORRECT: “Add a self-signed certificate to the RDS DB instance. Use the certificates in all connections to the RDS DB instance” is incorrect. You cannot use self-signed certificates with RDS.
Question 24:
A company runs a dynamic website that is hosted on an on-premises server in the United States. The company is expanding to Europe and is investigating how they can optimize the performance of the website for European users. The website’s backend must remain in the United States. The company requires a solution that can be implemented within a few days.
What should a Solutions Architect recommend?
A. Migrate the website to Amazon S3. Use cross-Region replication between Regions and latency-based Route 53 policy.
B. Launch an Amazon EC2 instance in an AWS Region in the United States and migrate the website to it.
C. Use Amazon CloudFront with a custom origin pointing to the on-premises servers.
D. Use Amazon CloudFront with Lambda@Edge to direct traffic to an on-premises origin.
Explanation
A custom origin can point to an on-premises server and CloudFront is able to cache content for dynamic websites. CloudFront can provide performance optimizations for custom origins even if they are running on on-premises servers. These include persistent TCP connections to the origin and SSL enhancements such as session tickets and OCSP stapling.
Additionally, connections are routed from the nearest Edge Location to the user across the AWS global network. If the on-premises server is connected via a Direct Connect (DX) link this can further improve performance.
CORRECT: “Use Amazon CloudFront with a custom origin pointing to the on-premises servers” is the correct answer.
INCORRECT: “Use Amazon CloudFront with Lambda@Edge to direct traffic to an on-premises origin” is incorrect. Lambda@Edge is not used to direct traffic to on-premises origins.
INCORRECT: “Launch an Amazon EC2 instance in an AWS Region in the United States and migrate the website to it” is incorrect. This would not necessarily improve performance for European users.
INCORRECT: “Migrate the website to Amazon S3. Use cross-Region replication between Regions and a latency-based Route 53 policy” is incorrect. You cannot host dynamic websites on Amazon S3 (static only).
Question 25:
A persistent database must be migrated from an on-premises server to an Amazon EC2 instance. The database requires 64,000 IOPS and, if possible, should be stored on a single Amazon EBS volume.
Which solution should a Solutions Architect recommend?
A. Create an Amazon EC2 instance with four Amazon EBS General Purpose SSD (gp2) volumes attached. Max out the IOPS on each volume and use a RAID 0 stripe set
B. Create an Amazon EC2 instance with two Amazon EBS Provisioned IOPS SSD (io1) volumes attached. Provision 32,000 IOPS per volume and create a logical volume using the OS that aggregates the capacity.
C. Use an instance from the I3 I/O optimized family and leverage instance store storage to achieve the IOPS requirement.
D. Create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS SSD (io1) volume attached. Provision 64,000 IOPS for the volume.
Explanation
An Amazon EC2 instance built on the Nitro System is needed here, as io1 volumes can deliver up to 64,000 IOPS only when attached to Nitro-based instances; other instance families support a maximum of 32,000 IOPS. For the data storage volume, a single io1 volume with sufficient capacity (io1 supports up to 50 IOPS per GiB) can deliver the required 64,000 IOPS.
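A sketch of provisioning the volume (the AZ is hypothetical); note the minimum size implied by the 50:1 IOPS-to-GiB ratio:

```python
import boto3

ec2 = boto3.client("ec2")

# io1 allows up to 50 IOPS per GiB, so 64,000 IOPS needs >= 1,280 GiB.
ec2.create_volume(
    VolumeType="io1",
    Iops=64000,
    Size=1280,  # GiB: 64,000 / 50
    AvailabilityZone="us-east-1a",
)
```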
CORRECT: “Create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS SSD (io1) volume attached. Provision 64,000 IOPS for the volume” is the correct answer.
INCORRECT: “Use an instance from the I3 I/O optimized family and leverage instance store storage to achieve the IOPS requirement” is incorrect. Instance store volumes are ephemeral and their data is lost when the instance stops, so they are not suitable for a persistent database.
INCORRECT: “Create an Amazon EC2 instance with four Amazon EBS General Purpose SSD (gp2) volumes attached. Max out the IOPS on each volume and use a RAID 0 stripe set” is incorrect. This is not a good use case for gp2 volumes. It is much better to use io1 which also meets the requirement of having a single volume with 64,000 IOPS.
INCORRECT: “Create an Amazon EC2 instance with two Amazon EBS Provisioned IOPS SSD (io1) volumes attached. Provision 32,000 IOPS per volume and create a logical volume using the OS that aggregates the capacity” is incorrect. There is no need to create two volumes and aggregate capacity through the OS; the Solutions Architect can simply create a single volume with 64,000 IOPS.