Set 2 Kindle SAA-C03 Practice Test Flashcards
Source: Davis, Neal. AWS Certified Solutions Architect Associate Practice Tests 2022 [SAA-C03]: 390 AWS Practice Exam Questions with Answers & detailed Explanations. Kindle Edition.
A group of business analysts perform read-only SQL queries on an Amazon RDS database. The queries have become quite numerous and the database has experienced some performance degradation. The queries must be run against the latest data. A Solutions Architect must solve the performance problems with minimal changes to the existing web application. What should the Solutions Architect recommend?
A. Export the data to Amazon S3 and instruct the business analysts to run their queries using Amazon Athena.
B. Load the data into an Amazon Redshift cluster and instruct the business analysts to run their queries against the cluster.
C. Load the data into Amazon ElastiCache and instruct the business analysts to run their queries against the ElastiCache endpoint.
D. Create a read replica of the primary database and instruct the business analysts to direct queries to the replica.
D. Create a read replica of the primary database and instruct the business analysts to direct queries to the replica.
Explanation:
The performance issues can easily be resolved by offloading the business analysts' SQL queries to a read replica. This ensures that the data being queried is up to date while the existing web application requires no modifications.
CORRECT: “Create a read replica of the primary database and instruct the business analysts to direct queries to the replica” is the correct answer.
INCORRECT: “Export the data to Amazon S3 and instruct the business analysts to run their queries using Amazon Athena” is incorrect. The queries must run against the latest data, so this method would require constantly exporting the data.
INCORRECT: “Load the data into an Amazon Redshift cluster and instruct the business analysts to run their queries against the cluster” is incorrect. This solution also requires exporting and loading the data, which means it will become out of date over time.
INCORRECT: “Load the data into Amazon ElastiCache and instruct the business analysts to run their queries against the ElastiCache endpoint” is incorrect. It is much easier to create a read replica, and ElastiCache would require updates to the application code, so it should be avoided here.
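For reference, creating the replica is a single API call. Below is a minimal boto3 sketch, assuming hypothetical instance identifiers; the analysts would then point their SQL clients at the replica's endpoint:

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the existing primary instance.
# Both identifiers here are hypothetical examples.
response = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="analytics-replica",      # name for the new replica
    SourceDBInstanceIdentifier="production-db",    # the existing primary
)

# The replica's endpoint becomes available once the instance finishes creating.
print(response["DBInstance"]["DBInstanceStatus"])
```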
A company is planning to upload a large quantity of sensitive data to Amazon S3. The company’s security department requires that the data is encrypted before it is uploaded. Which option meets these requirements?
A. Use server-side encryption with customer-provided encryption keys.
B. Use client-side encryption with a master key stored in AWS KMS.
C. Use client-side encryption with Amazon S3 managed encryption keys.
D. Use server-side encryption with keys stored in KMS.
B. Use client-side encryption with a master key stored in AWS KMS.
Explanation:
The requirement is that the objects must be encrypted before they are uploaded. The only option presented that meets this requirement is to use client-side encryption. You then have two options for the keys you use to perform the encryption: use a customer master key (CMK) stored in AWS Key Management Service (AWS KMS), or use a master key that you store within your application. In this case the correct answer is to use an AWS KMS key. Note that you cannot use client-side encryption with keys managed by Amazon S3.
CORRECT: “Use client-side encryption with a master key stored in AWS KMS” is the correct answer.
INCORRECT: “Use client-side encryption with Amazon S3 managed encryption keys” is incorrect. You cannot use S3 managed keys with client-side encryption.
INCORRECT: “Use server-side encryption with customer-provided encryption keys” is incorrect. With this option the encryption takes place after uploading to S3.
INCORRECT: “Use server-side encryption with keys stored in KMS” is incorrect. With this option the encryption takes place after uploading to S3.
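boto3 has no built-in client-side encryption helper (production code would typically use the AWS Encryption SDK or an S3 Encryption Client). As a conceptual sketch of the envelope-encryption pattern with a KMS-stored master key, with all names hypothetical:

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Ask KMS for a fresh data key protected by the master key in KMS.
key = kms.generate_data_key(KeyId="alias/upload-key", KeySpec="AES_256")

# Encrypt the object locally BEFORE it leaves the client.
nonce = os.urandom(12)
ciphertext = AESGCM(key["Plaintext"]).encrypt(nonce, b"sensitive data", None)

# Upload only ciphertext, plus the KMS-wrapped copy of the data key so an
# authorized client can later ask KMS to unwrap it and decrypt the object.
s3.put_object(
    Bucket="sensitive-bucket",
    Key="data.enc",
    Body=nonce + ciphertext,
    Metadata={"x-enc-data-key": key["CiphertextBlob"].hex()},
)
```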
An application running on Amazon ECS processes data and then writes objects to an Amazon S3 bucket. The application requires permissions to make the S3 API calls. How can a Solutions Architect ensure the application has the required permissions?
A. Update the S3 policy in IAM to allow read/write access from Amazon ECS, and then relaunch the container.
B. Create a set of Access Keys with read/write permissions to the bucket and update the task credential ID.
C. Create an IAM role that has read/write permissions to the bucket and update the task definition to specify the role as the taskRoleArn.
D. Attach an IAM policy with read/write permissions to the bucket to an IAM group and add the container instances to the group.
C. Create an IAM role that has read/write permissions to the bucket and update the task definition to specify the role as the taskRoleArn.
Explanation:
With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances. You define the IAM role to use in your task definitions, or you can use a taskRoleArn override when running a task manually with the RunTask API operation. Note that there are instance roles and task roles that you can assign in ECS when using the EC2 launch type. The task role is better when you need to assign permissions for just that specific task.
CORRECT: “Create an IAM role that has read/write permissions to the bucket and update the task definition to specify the role as the taskRoleArn” is the correct answer.
INCORRECT: “Update the S3 policy in IAM to allow read/write access from Amazon ECS, and then relaunch the container” is incorrect. Permissions must be assigned to tasks using IAM roles, which is not mentioned here.
INCORRECT: “Create a set of Access Keys with read/write permissions to the bucket and update the task credential ID” is incorrect. There is no task credential ID to update with access keys; roles should be used instead.
INCORRECT: “Attach an IAM policy with read/write permissions to the bucket to an IAM group and add the container instances to the group” is incorrect. You cannot add container instances to an IAM group.
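A minimal boto3 sketch of a Fargate task definition that sets the task role (role ARN, image, and names are hypothetical; note the task role is distinct from the execution role ECS uses to pull images and write logs):

```python
import boto3

ecs = boto3.client("ecs")

# Containers in this task automatically receive temporary credentials
# for the role named in taskRoleArn.
ecs.register_task_definition(
    family="s3-writer",
    taskRoleArn="arn:aws:iam::123456789012:role/s3-read-write-role",
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
        "essential": True,
    }],
)
```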
An application upgrade caused some issues with stability. The application owner enabled logging and has generated a 5 GB log file in an Amazon S3 bucket. The log file must be securely shared with the application vendor to troubleshoot the issues. What is the MOST secure way to share the log file?
A. Create access keys using an administrative account and share the access key ID and secret access key with the vendor.
B. Enable default encryption for the bucket and public access. Provide the S3 URL of the file to the vendor.
C. Create an IAM user for the vendor to provide access to the S3 bucket and the application. Enforce multi-factor authentication.
D. Generate a presigned URL and ask the vendor to download the log file before the URL expires.
D. Generate a presigned URL and ask the vendor to download the log file before the URL expires.
Explanation:
A presigned URL gives you access to the object identified in the URL. When you create a presigned URL, you must provide your security credentials and then specify a bucket name, an object key, an HTTP method (for example, GET to download the object), and an expiration date and time. The presigned URL is valid only for the specified duration; that is, the action must be started before the expiration date and time. This is the most secure way to provide the vendor with time-limited access to the log file in the S3 bucket.
CORRECT: “Generate a presigned URL and ask the vendor to download the log file before the URL expires” is the correct answer.
INCORRECT: “Create an IAM user for the vendor to provide access to the S3 bucket and the application. Enforce multi-factor authentication” is incorrect. This is less secure, as you have to create an account with access to AWS and then ensure the account is locked down appropriately.
INCORRECT: “Create access keys using an administrative account and share the access key ID and secret access key with the vendor” is incorrect. This is extremely insecure, as the access keys would provide administrative permissions to AWS and should never be shared.
INCORRECT: “Enable default encryption for the bucket and public access. Provide the S3 URL of the file to the vendor” is incorrect. Encryption does not help here, as the bucket would be public and anyone could access it.
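Generating the presigned URL is one call in boto3. A minimal sketch, with a hypothetical bucket and key:

```python
import boto3

s3 = boto3.client("s3")

# Create a time-limited download link for the log file. After
# ExpiresIn seconds the URL stops working.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "app-logs-bucket", "Key": "logs/app-upgrade.log"},
    ExpiresIn=3600,  # one hour
)
print(url)  # share this URL with the vendor
```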
A company has a file share on a Microsoft Windows Server in an on-premises data center. The server uses a local network attached storage (NAS) device to store several terabytes of files. The management team requires a reduction in the data center footprint and wants to minimize storage costs by moving on-premises storage to AWS. What should a Solutions Architect do to meet these requirements?
A. Create an Amazon EFS volume and use an IPSec VPN.
B. Configure an AWS Storage Gateway file gateway.
C. Create an Amazon S3 bucket and an S3 gateway endpoint.
D. Configure an AWS Storage Gateway as a volume gateway.
B. Configure an AWS Storage Gateway file gateway.
Explanation:
An AWS Storage Gateway file gateway provides your applications a file interface to seamlessly store files as objects in Amazon S3, and to access them using industry-standard file protocols. This removes the files from the on-premises NAS device and provides a method of directly mounting the file share for on-premises servers and clients.
CORRECT: “Configure an AWS Storage Gateway file gateway” is the correct answer.
INCORRECT: “Configure an AWS Storage Gateway as a volume gateway” is incorrect. A volume gateway uses block-based protocols. In this case we are replacing a NAS device, which uses file-level protocols, so the best option is a file gateway.
INCORRECT: “Create an Amazon EFS volume and use an IPSec VPN” is incorrect. EFS can be mounted over a VPN but it would have more latency than using a storage gateway.
INCORRECT: “Create an Amazon S3 bucket and an S3 gateway endpoint” is incorrect. S3 is an object-level storage system, so it is not suitable for this use case. A gateway endpoint is a method of accessing S3 using private addresses from your VPC, not from your data center.
A company uses a Microsoft Windows file share for storing documents and media files. Users access the share using Microsoft Windows clients and are authenticated using the company’s Active Directory. The chief information officer wants to move the data to AWS as they are approaching capacity limits. The existing user authentication and access management system should be used. How can a Solutions Architect meet these requirements?
A. Move the documents and media files to an Amazon FSx for Windows File Server file system.
B. Move the documents and media files to an Amazon Elastic File System and use POSIX permissions.
C. Move the documents and media files to an Amazon FSx for Lustre file system.
D. Move the documents and media files to an Amazon Simple Storage Service bucket and apply bucket ACLs.
A. Move the documents and media files to an Amazon FSx for Windows File Server file system.
Explanation:
Amazon FSx for Windows File Server makes it easy for you to launch and scale reliable, performant, and secure shared file storage for your applications and end users. With Amazon FSx you can launch highly durable and available file systems that can span multiple Availability Zones (AZs) and can be accessed from up to thousands of compute instances using the industry-standard Server Message Block (SMB) protocol. It provides a rich set of administrative and security features, and it integrates with Microsoft Active Directory (AD), so the existing user authentication and access management system can continue to be used. To serve a wide spectrum of workloads, Amazon FSx provides high levels of file system throughput and IOPS and consistent sub-millisecond latencies. You can also mount FSx file systems from on-premises using a VPN or Direct Connect connection.
CORRECT: “Move the documents and media files to an Amazon FSx for Windows File Server file system” is the correct answer.
INCORRECT: “Move the documents and media files to an Amazon FSx for Lustre file system” is incorrect. FSx for Lustre is not suitable for migrating a Microsoft Windows file share.
INCORRECT: “Move the documents and media files to an Amazon Elastic File System and use POSIX permissions” is incorrect. EFS can be used from on-premises over a VPN or DX connection, but POSIX permissions are very different from Windows permissions and would mean a different authentication and access management solution is required.
INCORRECT: “Move the documents and media files to an Amazon Simple Storage Service bucket and apply bucket ACLs” is incorrect. S3 with bucket ACLs would be a change to object-based storage and a completely different authentication and access management solution.
A company requires a solution for replicating data to AWS for disaster recovery. Currently, the company uses scripts to copy data from various sources to a Microsoft Windows file server in the on-premises data center. The company also requires that a small number of recent files remain accessible to administrators with low latency. What should a Solutions Architect recommend to meet these requirements?
A. Update the script to copy data to an AWS Storage Gateway for File Gateway virtual appliance instead of the on-premises file server.
B. Update the script to copy data to an Amazon EBS volume instead of the on-premises file server.
C. Update the script to copy data to an Amazon EFS volume instead of the on-premises file server.
D. Update the script to copy data to an Amazon S3 Glacier archive instead of the on-premises file server.
A. Update the script to copy data to an AWS Storage Gateway for File Gateway virtual appliance instead of the on-premises file server.
Explanation:
The best solution here is to use an AWS Storage Gateway file gateway virtual appliance in the on-premises data center. This can be accessed using the same protocols as the existing Microsoft Windows file server (SMB/CIFS), so the script simply needs to be updated to point to the gateway. The file gateway then stores data on Amazon S3 and keeps a local cache of recently used data that can be accessed at low latency. The file gateway provides an excellent method of enabling file protocol access to low-cost S3 object storage.
CORRECT: “Update the script to copy data to an AWS Storage Gateway for File Gateway virtual appliance instead of the on-premises file server” is the correct answer.
INCORRECT: “Update the script to copy data to an Amazon EBS volume instead of the on-premises file server” is incorrect. This would also need an attached EC2 instance running Windows to be able to mount the volume using the same protocols, and it would not offer any local low-latency access.
INCORRECT: “Update the script to copy data to an Amazon EFS volume instead of the on-premises file server” is incorrect. This solution would not provide a local cache.
INCORRECT: “Update the script to copy data to an Amazon S3 Glacier archive instead of the on-premises file server” is incorrect. This would not provide any immediate, low-latency access.
A company runs an application in an Amazon VPC that requires access to an Amazon Elastic Container Service (Amazon ECS) cluster that hosts an application in another VPC. The company’s security team requires that all traffic must not traverse the internet. Which solution meets this requirement?
A. Create a Network Load Balancer and AWS PrivateLink endpoint for Amazon ECS in the VPC that hosts the ECS cluster.
B. Configure a gateway endpoint for Amazon ECS. Update the route table to include an entry pointing to the ECS cluster.
C. Configure an Amazon Route 53 private hosted zone for each VPC. Use private records to resolve internal IP addresses in each VPC.
D. Create a Network Load Balancer in one VPC and an AWS PrivateLink endpoint for Amazon ECS in another VPC.
D. Create a Network Load Balancer in one VPC and an AWS PrivateLink endpoint for Amazon ECS in another VPC.
Explanation:
The correct solution is to use AWS PrivateLink in a service provider model. In this configuration a Network Load Balancer is implemented in the service provider VPC (the one with the ECS cluster in this example), and a PrivateLink endpoint is created in the consumer VPC (the one with the company’s application).
CORRECT: “Create a Network Load Balancer in one VPC and an AWS PrivateLink endpoint for Amazon ECS in another VPC” is the correct answer.
INCORRECT: “Create a Network Load Balancer and AWS PrivateLink endpoint for Amazon ECS in the VPC that hosts the ECS cluster” is incorrect. The endpoint should be in the consumer VPC, not the service provider VPC.
INCORRECT: “Configure a gateway endpoint for Amazon ECS. Update the route table to include an entry pointing to the ECS cluster” is incorrect. You cannot use a gateway endpoint to connect to a private service. Gateway endpoints are only available for S3 and DynamoDB.
INCORRECT: “Configure an Amazon Route 53 private hosted zone for each VPC. Use private records to resolve internal IP addresses in each VPC” is incorrect. This only provides name resolution; the answer includes no method of private communication between the VPCs, such as VPC peering.
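A sketch of the two sides of the PrivateLink setup in boto3, with all ARNs and IDs hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Provider side: expose the NLB fronting the ECS service as an endpoint service.
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/ecs-nlb/abc123"
    ],
    AcceptanceRequired=False,
)

# Consumer side: create an interface endpoint to that service in the
# application's VPC; traffic never leaves the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0consumer",
    ServiceName=svc["ServiceConfiguration"]["ServiceName"],
    SubnetIds=["subnet-0a", "subnet-0b"],
    SecurityGroupIds=["sg-0endpoint"],
)
```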
An application stores transactional data in an Amazon S3 bucket. The data is analyzed for the first week and then must remain immediately available for occasional analysis. What is the MOST cost-effective storage solution that meets the requirements?
A. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days.
B. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 7 days.
C. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
D. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
C. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
Explanation:
The transition should be to Standard-IA rather than One Zone-IA. Though One Zone-IA would be cheaper, it offers lower availability, and the question states the objects “must remain immediately available”, so availability is a consideration. Additionally, although there is no minimum storage duration for S3 Standard, objects must be stored for at least 30 days before a lifecycle rule can transition them to Standard-IA or One Zone-IA, so a 7-day transition is not possible. Therefore, the best solution is to transition to Standard-IA after 30 days.
CORRECT: “Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days” is the correct answer.
INCORRECT: “Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days” is incorrect, as explained above.
INCORRECT: “Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days” is incorrect, as explained above.
INCORRECT: “Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 7 days” is incorrect, as explained above.
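The 30-day transition can be expressed as a lifecycle rule. A minimal boto3 sketch with a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Move objects to Standard-IA after 30 days, the earliest point a
# lifecycle rule may transition to an IA storage class.
s3.put_bucket_lifecycle_configuration(
    Bucket="transactions-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-standard-ia",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        }]
    },
)
```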
A highly sensitive application runs on Amazon EC2 instances using EBS volumes. The application stores data temporarily on Amazon EBS volumes during processing before saving results to an Amazon RDS database. The company’s security team mandates that the sensitive data must be encrypted at rest. Which solution should a Solutions Architect recommend to meet this requirement?
A. Configure encryption for the Amazon EBS volumes and Amazon RDS database with AWS KMS keys.
B. Use AWS Certificate Manager to generate certificates that can be used to encrypt the connections between the EC2 instances and RDS.
C. Use Amazon Data Lifecycle Manager to encrypt all data as it is stored to the EBS volumes and RDS database.
D. Configure SSL/TLS encryption using AWS KMS customer master keys (CMKs) to encrypt database volumes.
A. Configure encryption for the Amazon EBS volumes and Amazon RDS database with AWS KMS keys.
Explanation:
As the data is stored both on the EBS volumes (temporarily) and in the RDS database, both must be encrypted at rest. This can be achieved by enabling encryption at creation time of the volume and the database, using AWS KMS keys to encrypt the data. This solution meets all requirements.
CORRECT: “Configure encryption for the Amazon EBS volumes and Amazon RDS database with AWS KMS keys” is the correct answer.
INCORRECT: “Use AWS Certificate Manager to generate certificates that can be used to encrypt the connections between the EC2 instances and RDS” is incorrect. This would encrypt the data in transit but not at rest.
INCORRECT: “Use Amazon Data Lifecycle Manager to encrypt all data as it is stored to the EBS volumes and RDS database” is incorrect. DLM is used for automating the process of taking and managing snapshots of EBS volumes; it does not encrypt data.
INCORRECT: “Configure SSL/TLS encryption using AWS KMS customer master keys (CMKs) to encrypt database volumes” is incorrect. You cannot configure SSL/TLS encryption using KMS CMKs, and SSL/TLS does not encrypt data at rest.
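As an illustration, encryption at rest is selected at creation time for both resources. A boto3 sketch with hypothetical AMI, key alias, and identifiers:

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# Launch an instance with a KMS-encrypted EBS data volume.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvdb",
        "Ebs": {"VolumeSize": 100, "Encrypted": True, "KmsKeyId": "alias/app-key"},
    }],
)

# Create the RDS database with storage encryption; this cannot be
# enabled later on an existing unencrypted instance.
rds.create_db_instance(
    DBInstanceIdentifier="results-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me",  # placeholder only
    StorageEncrypted=True,
    KmsKeyId="alias/app-key",
)
```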
A company runs an eCommerce application that uses an Amazon Aurora database. The database performs well except for short periods when monthly sales reports are run. A Solutions Architect has reviewed metrics in Amazon CloudWatch and found that the ReadIOPS and CPUUtilization metrics are spiking during the periods when the sales reports are run. What is the MOST cost-effective solution to solve this performance issue?
A. Create an Amazon Redshift data warehouse and run the reporting there.
B. Modify the Aurora database to use an instance class with more CPU.
C. Create an Aurora Replica and use the replica endpoint for reporting.
D. Enable storage Auto Scaling for the Amazon Aurora database.
C. Create an Aurora Replica and use the replica endpoint for reporting.
Explanation:
The simplest and most cost-effective option is to use an Aurora Replica. The replica can serve read operations, which means the reporting application can run reports against the replica endpoint without causing any performance impact on the production database.
CORRECT: “Create an Aurora Replica and use the replica endpoint for reporting” is the correct answer.
INCORRECT: “Enable storage Auto Scaling for the Amazon Aurora database” is incorrect. Aurora storage scales automatically; there is no storage Auto Scaling feature to enable, and storage capacity is not the issue here.
INCORRECT: “Create an Amazon Redshift data warehouse and run the reporting there” is incorrect. This would be less cost-effective and would require more work to copy the data to the data warehouse.
INCORRECT: “Modify the Aurora database to use an instance class with more CPU” is incorrect. This may not fully resolve the performance issues and could be more expensive, depending on instance sizes.
A company runs an application on Amazon EC2 instances which requires access to sensitive data in an Amazon S3 bucket. All traffic between the EC2 instances and the S3 bucket must not traverse the internet and must use private IP addresses. Additionally, the bucket must only allow access from services in the VPC. Which combination of actions should a Solutions Architect take to meet these requirements? (Select TWO.)
A. Create a VPC endpoint for Amazon S3.
B. Apply a bucket policy to restrict access to the S3 endpoint.
C. Enable default encryption on the bucket.
D. Create a peering connection to the S3 bucket VPC.
E. Apply an IAM policy to a VPC peering connection.
A. Create a VPC endpoint for Amazon S3.
B. Apply a bucket policy to restrict access to the S3 endpoint.
Explanation:
Private access to public services such as Amazon S3 can be achieved by creating a VPC endpoint in the VPC. For S3 this is a gateway endpoint. The bucket policy can then be configured to restrict access to the S3 endpoint only, which ensures that only services originating from the VPC are granted access.
CORRECT: “Create a VPC endpoint for Amazon S3” is a correct answer.
CORRECT: “Apply a bucket policy to restrict access to the S3 endpoint” is also a correct answer.
INCORRECT: “Enable default encryption on the bucket” is incorrect. This will encrypt data at rest but does not restrict access.
INCORRECT: “Create a peering connection to the S3 bucket VPC” is incorrect. You cannot create a peering connection to S3, as it is a public service and does not run in a VPC.
INCORRECT: “Apply an IAM policy to a VPC peering connection” is incorrect. You cannot apply an IAM policy to a peering connection.
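The bucket policy side can be expressed with the aws:sourceVpce condition key. A sketch with hypothetical bucket and endpoint IDs:

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny every request that does not arrive through the VPC gateway endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AccessViaVpcEndpointOnly",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::sensitive-bucket",
            "arn:aws:s3:::sensitive-bucket/*",
        ],
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}},
    }],
}
s3.put_bucket_policy(Bucket="sensitive-bucket", Policy=json.dumps(policy))
```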
A company wants to migrate a legacy web application from an on-premises data center to AWS. The web application consists of a web tier, an application tier, and a MySQL database. The company does not want to manage instances or clusters. Which combination of services should a solutions architect include in the overall architecture? (Select TWO.)
A. Amazon DynamoDB
B. Amazon RDS for MySQL
C. Amazon EC2 Spot Instances
D. Amazon Kinesis Data Streams
E. AWS Fargate
B. Amazon RDS for MySQL
E. AWS Fargate
Explanation:
Amazon RDS is a managed service, so you do not need to manage the instances. It is an ideal backend for the application, and a MySQL database can run on RDS without any refactoring. The application components can run in Docker containers on AWS Fargate, a serverless service for running containers on AWS.
CORRECT: “AWS Fargate” is a correct answer.
CORRECT: “Amazon RDS for MySQL” is also a correct answer.
INCORRECT: “Amazon DynamoDB” is incorrect. This is a NoSQL database and is incompatible with the relational MySQL database.
INCORRECT: “Amazon EC2 Spot Instances” is incorrect. This would require managing instances.
INCORRECT: “Amazon Kinesis Data Streams” is incorrect. This is a service for streaming data.
A web application is being deployed on an Amazon ECS cluster using the Fargate launch type. The application is expected to receive a large volume of traffic initially. The company wishes to ensure that performance is good for the launch and that costs reduce as demand decreases. What should a solutions architect recommend?
A. Use Amazon EC2 Auto Scaling to scale out on a schedule and back in once the load decreases.
B. Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm.
C. Use Amazon ECS Service Auto Scaling with target tracking policies to scale when an Amazon CloudWatch alarm is breached.
D. Use Amazon EC2 Auto Scaling with simple scaling policies to scale when an Amazon CloudWatch alarm is breached.
C. Use Amazon ECS Service Auto Scaling with target tracking policies to scale when an Amazon CloudWatch alarm is breached.
Explanation:
Amazon ECS uses the AWS Application Auto Scaling service to scale tasks, configured through Amazon ECS Service Auto Scaling. A target tracking scaling policy increases or decreases the number of tasks that your service runs based on a target value for a specific metric, for example scaling tasks when average CPU utilization (as reported by CloudWatch) breaches 80%.
CORRECT: “Use Amazon ECS Service Auto Scaling with target tracking policies to scale when an Amazon CloudWatch alarm is breached” is the correct answer.
INCORRECT: “Use Amazon EC2 Auto Scaling with simple scaling policies to scale when an Amazon CloudWatch alarm is breached” is incorrect. EC2 Auto Scaling scales EC2 instances, not ECS tasks, and with the Fargate launch type there are no instances to manage.
INCORRECT: “Use Amazon EC2 Auto Scaling to scale out on a schedule and back in once the load decreases” is incorrect for the same reason, and a schedule does not track actual demand.
INCORRECT: “Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm” is incorrect. This is unnecessary custom work, as ECS Service Auto Scaling provides this capability natively.
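ECS Service Auto Scaling is configured through the Application Auto Scaling API. A sketch with hypothetical cluster/service names and an 80% CPU target:

```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the service's DesiredCount as the scalable dimension.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=50,
)

# Target tracking: add or remove tasks to hold average CPU near 80%.
aas.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 80.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleInCooldown": 300,   # scale in slowly as demand decreases
        "ScaleOutCooldown": 60,   # scale out quickly for the launch
    },
)
```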
A company runs several NFS file servers in an on-premises data center. The NFS servers must run periodic backups to Amazon S3 using automatic synchronization for small volumes of data. Which solution meets these requirements and is MOST cost-effective?
A. Set up AWS Glue to extract the data from the NFS shares and load it into Amazon S3.
B. Set up an AWS DataSync agent on the on-premises servers and sync the data to Amazon S3.
C. Set up an SFTP sync using AWS Transfer for SFTP to sync data from on premises to Amazon S3.
D. Set up an AWS Direct Connect connection between the on-premises data center and AWS and copy the data to Amazon S3.
B. Set up an AWS DataSync agent on the on-premises servers and sync the data to Amazon S3.
Explanation:
AWS DataSync is an online data transfer service that simplifies, automates, and accelerates copying large amounts of data between on-premises systems and AWS Storage services, as well as between AWS Storage services. DataSync can copy data between Network File System (NFS) shares, Server Message Block (SMB) shares, self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, and Amazon FSx for Windows File Server file systems. This is the most cost-effective solution from the answer options available.
CORRECT: “Set up an AWS DataSync agent on the on-premises servers and sync the data to Amazon S3” is the correct answer.
INCORRECT: “Set up an SFTP sync using AWS Transfer for SFTP to sync data from on premises to Amazon S3” is incorrect. This solution does not provide the scheduled synchronization features of AWS DataSync and is more expensive.
INCORRECT: “Set up AWS Glue to extract the data from the NFS shares and load it into Amazon S3” is incorrect. AWS Glue is an ETL service and cannot be used for copying data to Amazon S3 from NFS shares.
INCORRECT: “Set up an AWS Direct Connect connection between the on-premises data center and AWS and copy the data to Amazon S3” is incorrect. An AWS Direct Connect connection is an expensive option and provides no solution for automatic synchronization.
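A sketch of the DataSync pieces: an NFS source location via the agent, an S3 destination, and a scheduled task for the periodic sync (all ARNs, hostnames, and the schedule are hypothetical):

```python
import boto3

ds = boto3.client("datasync")

# Source: the on-premises NFS share, reached through the DataSync agent.
src = ds.create_location_nfs(
    ServerHostname="nfs1.corp.example.com",
    Subdirectory="/exports/data",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/agent-0abc"]},
)

# Destination: an S3 bucket, via a role DataSync is allowed to assume.
dst = ds.create_location_s3(
    S3BucketArn="arn:aws:s3:::backup-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/datasync-s3-role"},
)

# A scheduled task provides the automatic periodic synchronization.
ds.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},  # daily at 02:00 UTC
)
```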
An organization plans to deploy a high performance computing (HPC) workload on AWS using Linux. The HPC workload will use many Amazon EC2 instances and will generate a large quantity of small output files that must be stored in persistent storage for future use. A Solutions Architect must design a solution that will enable the EC2 instances to access data using native file system interfaces and to store output files in cost-effective long-term storage. Which combination of AWS services meets these requirements?
A. Amazon FSx for Lustre with Amazon S3.
B. Amazon FSx for Windows File Server with Amazon S3.
C. Amazon EBS volumes with Amazon S3 Glacier.
D. AWS DataSync with Amazon S3 Intelligent-Tiering.
A. Amazon FSx for Lustre with Amazon S3.
Explanation:
Amazon FSx for Lustre is ideal for high performance computing (HPC) workloads running on Linux instances. FSx for Lustre provides a native file system interface and works as any file system does with your Linux operating system. When linked to an Amazon S3 bucket, FSx for Lustre transparently presents objects as files, allowing you to run your workload without managing data transfer from S3. This solution meets all requirements, as it enables Linux workloads to use native file system interfaces and to use S3 for long-term, cost-effective storage of output files.
CORRECT: “Amazon FSx for Lustre with Amazon S3” is the correct answer.
INCORRECT: “Amazon FSx for Windows File Server with Amazon S3” is incorrect. This service is used with Windows instances and does not integrate with S3.
INCORRECT: “Amazon EBS volumes with Amazon S3 Glacier” is incorrect. EBS volumes do not provide a shared, high-performance storage solution using file system interfaces.
INCORRECT: “AWS DataSync with Amazon S3 Intelligent-Tiering” is incorrect. AWS DataSync is used for migrating and synchronizing data.
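A sketch of creating an S3-linked Lustre file system in boto3 (bucket, subnet, and sizing are hypothetical; ImportPath/ExportPath apply to S3-linked scratch file systems):

```python
import boto3

fsx = boto3.client("fsx")

# Objects in the linked bucket appear as files to the Linux instances;
# processed output can be exported back to S3 for long-term storage.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,           # GiB; Lustre's minimum increment
    SubnetIds=["subnet-0abc"],
    LustreConfiguration={
        "ImportPath": "s3://hpc-input-bucket",
        "ExportPath": "s3://hpc-input-bucket/results",
    },
)
```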
An application has been deployed on Amazon EC2 instances behind an Application Load Balancer (ALB). A Solutions Architect must improve the security posture of the application and minimize the impact of a DDoS attack on resources. Which of the following solutions is MOST effective?
A. Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL on the Application Load Balancer.
B. Create a custom AWS Lambda function that monitors for suspicious traffic and modifies a network ACL when a potential DDoS attack is identified.
C. Enable VPC Flow Logs and store them in Amazon S3. Use Amazon Athena to parse the logs and identify and block potential DDoS attacks.
D. Enable access logs on the Application Load Balancer and configure Amazon CloudWatch to monitor the access logs and trigger a Lambda function when potential attacks are identified. Configure the Lambda function to modify the ALB’s security group and block the attack.
A. Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL on the Application Load Balancer.
Explanation:
A rate-based rule tracks the rate of requests for each originating IP address and triggers the rule action on IPs with rates that go over a limit. You set the limit as the number of requests per 5-minute time span. You can use this type of rule to put a temporary block on requests from an IP address that is sending excessive requests. By default, AWS WAF aggregates requests based on the IP address from the web request origin, but you can configure the rule to use an IP address from an HTTP header, like X-Forwarded-For, instead.
CORRECT: “Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL on the Application Load Balancer” is the correct answer.
INCORRECT: “Create a custom AWS Lambda function that monitors for suspicious traffic and modifies a network ACL when a potential DDoS attack is identified” is incorrect. There is no description here of how Lambda would monitor the traffic.
INCORRECT: “Enable VPC Flow Logs and store them in Amazon S3. Use Amazon Athena to parse the logs and identify and block potential DDoS attacks” is incorrect. Amazon Athena is not able to block DDoS attacks; another service would be needed.
INCORRECT: “Enable access logs on the Application Load Balancer and configure Amazon CloudWatch to monitor the access logs and trigger a Lambda function when potential attacks are identified. Configure the Lambda function to modify the ALB’s security group and block the attack” is incorrect. Access logs are exported to S3, not to CloudWatch. Also, it would not be possible to block an attack from a specific IP address using a security group (while still allowing all other sources access), as security groups do not support deny rules.
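A sketch of a rate-based rule using the WAFv2 API (names and the 2,000-request limit are hypothetical; for an ALB the scope is REGIONAL). The resulting web ACL is then attached to the load balancer with associate_web_acl:

```python
import boto3

waf = boto3.client("wafv2")

waf.create_web_acl(
    Name="alb-rate-limit",
    Scope="REGIONAL",              # ALBs use REGIONAL scope
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 0,
        # Block any source IP exceeding 2000 requests per 5 minutes.
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "RateLimitPerIP",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AlbRateLimitAcl",
    },
)
```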
An automotive company plans to implement IoT sensors in manufacturing equipment that will send data to AWS in real time. The solution must receive events in an ordered manner from each asset and ensure that the data is saved for future processing. Which solution would be MOST efficient?
A. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.
B. Use Amazon Kinesis Data Streams for real-time events with a shard for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon EBS.
C. Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS.
D. Use an Amazon SQS standard queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3
A. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.
Explanation:
Amazon Kinesis Data Streams is the ideal service for receiving streaming data. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream. Therefore, a separate partition key (rather than a shard) should be used for each equipment asset, which preserves ordering per asset. Amazon Kinesis Data Firehose can then receive the streaming data from the data stream and load it into Amazon S3 for future processing.
CORRECT: “Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3” is the correct answer.
INCORRECT: “Use Amazon Kinesis Data Streams for real-time events with a shard for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon EBS” is incorrect. A partition key should be used rather than a shard, as explained above, and Kinesis Data Firehose cannot deliver data to Amazon EBS.
INCORRECT: “Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS” is incorrect. Amazon SQS cannot be used for real-time use cases.
INCORRECT: “Use an Amazon SQS standard queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3” is incorrect. Amazon SQS cannot be used for real-time use cases.
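To illustrate the partition-key point, a producer sketch in boto3 (stream name, asset ID, and payload are hypothetical):

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Using the asset ID as the partition key keeps each asset's events in
# order, because records sharing a key always land on the same shard.
kinesis.put_record(
    StreamName="equipment-events",
    Data=json.dumps({"asset": "press-42", "temp_c": 81.5}).encode(),
    PartitionKey="press-42",
)
```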
An IoT sensor is being rolled out to thousands of a company’s existing customers. The sensors will stream high volumes of data each second to a central location. A solution must be designed to ingest and store the data for analytics. The solution must provide near-real time performance and millisecond responsiveness. Which solution should a Solutions Architect recommend?
A. Ingest the data into an Amazon SQS queue. Process the data using an AWS Lambda function and then store the data in Amazon Redshift.
B. Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon DynamoDB.
C. Ingest the data into an Amazon SQS queue. Process the data using an AWS Lambda function and then store the data in Amazon DynamoDB.
D. Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon Redshift.
B. Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon DynamoDB.
Explanation:
A Kinesis data stream is a set of shards, and each shard contains a sequence of data records. A consumer is an application that processes the data from a Kinesis data stream. You can map a Lambda function to a shared-throughput consumer (standard iterator), or to a dedicated-throughput consumer with enhanced fan-out. Amazon DynamoDB is the best database for this use case as it supports near-real-time performance and millisecond responsiveness.
CORRECT: “Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon DynamoDB” is the correct answer.
INCORRECT: “Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon Redshift” is incorrect. Amazon Redshift cannot provide millisecond responsiveness.
INCORRECT: “Ingest the data into an Amazon SQS queue. Process the data using an AWS Lambda function and then store the data in Amazon Redshift” is incorrect. Amazon SQS does not provide near-real-time performance, and Redshift does not provide millisecond responsiveness.
INCORRECT: “Ingest the data into an Amazon SQS queue. Process the data using an AWS Lambda function and then store the data in Amazon DynamoDB” is incorrect. Amazon SQS does not provide near-real-time performance.
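On the consumer side, a Lambda handler mapped to the stream might look like the sketch below (the table name and record schema are assumptions):

```python
import base64
import json
from decimal import Decimal

import boto3

table = boto3.resource("dynamodb").Table("SensorReadings")  # hypothetical table

def handler(event, context):
    # Kinesis delivers each record's payload base64-encoded in the event.
    for record in event["Records"]:
        payload = json.loads(
            base64.b64decode(record["kinesis"]["data"]),
            parse_float=Decimal,  # DynamoDB requires Decimal, not float
        )
        table.put_item(Item={
            "sensor_id": payload["sensor_id"],  # partition key (assumed schema)
            "ts": payload["ts"],                # sort key (assumed schema)
            "reading": payload["reading"],
        })
```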
A company runs a number of core enterprise applications in an on-premises data center. The data center is connected to an Amazon VPC using AWS Direct Connect. The company will be creating additional AWS accounts and these accounts will also need to be quickly, and cost-effectively connected to the on-premises data center in order to access the core applications. What deployment changes should a Solutions Architect implement to meet these requirements with the LEAST operational overhead?
A. Create a Direct Connect connection in each new account. Route the network traffic to the on-premises servers.
B. Configure VPC endpoints in the Direct Connect VPC for all required services. Route the network traffic to the on-premises servers.
C. Create a VPN connection between each new account and the Direct Connect VPC. Route the network traffic to the on-premises servers.
D. Configure AWS Transit Gateway between the accounts. Assign Direct Connect to the transit gateway and route network traffic to the on-premises servers
D. Configure AWS Transit Gateway between the accounts. Assign Direct Connect to the transit gateway and route network traffic to the on-premises servers
Explanation:
AWS Transit Gateway connects VPCs and on-premises networks through a central hub. With AWS Transit Gateway, you can quickly add Amazon VPCs, AWS accounts, VPN capacity, or AWS Direct Connect gateways to meet unexpected demand, without having to wrestle with complex connections or massive routing tables. This is the operationally least complex solution and is also cost-effective.
CORRECT: “Configure AWS Transit Gateway between the accounts. Assign Direct Connect to the transit gateway and route network traffic to the on-premises servers” is the correct answer.
INCORRECT: “Create a VPN connection between each new account and the Direct Connect VPC. Route the network traffic to the on-premises servers” is incorrect. You cannot connect VPCs using AWS managed VPNs; you would need to configure software VPNs and then complex routing configurations. This is not the best solution.
INCORRECT: “Create a Direct Connect connection in each new account. Route the network traffic to the on-premises servers” is incorrect. This is an expensive solution, as you would need multiple Direct Connect links.
INCORRECT: “Configure VPC endpoints in the Direct Connect VPC for all required services. Route the network traffic to the on-premises servers” is incorrect. You cannot create VPC endpoints for all services, and this would be a complex solution for those services you can.
A solutions architect has been tasked with designing a highly resilient hybrid cloud architecture connecting an on-premises data center and AWS. The network should include AWS Direct Connect (DX). Which DX configuration offers the HIGHEST resiliency?
A. Configure a DX connection with an encrypted VPN on top of it.
B. Configure multiple public VIFs on top of a DX connection.
C. Configure multiple private VIFs on top of a DX connection.
D. Configure DX connections at multiple DX locations.
D. Configure DX connections at multiple DX locations.
Explanation:
The most resilient solution is to configure DX connections at multiple DX locations. This ensures that any issue impacting a single DX location does not affect the availability of network connectivity to AWS. Take note of the following AWS recommendations for resiliency: AWS recommends connecting from multiple data centers for physical location redundancy. When designing remote connections, consider using redundant hardware and telecommunications providers. Additionally, it is a best practice to use dynamically routed, active/active connections for automatic load balancing and failover across redundant network connections. Provision sufficient network capacity to ensure that the failure of one network connection does not overwhelm and degrade redundant connections.
CORRECT: “Configure DX connections at multiple DX locations” is the correct answer.
INCORRECT: “Configure a DX connection with an encrypted VPN on top of it” is incorrect. A VPN that is separate from the DX connection can be a good backup, but a VPN on top of the DX connection does not help. Also, encryption provides security but not resilience.
INCORRECT: “Configure multiple public VIFs on top of a DX connection” is incorrect. Virtual interfaces do not add resiliency, as resiliency must be designed into the underlying connection.
INCORRECT: “Configure multiple private VIFs on top of a DX connection” is incorrect for the same reason: the VIFs share the same underlying physical connection.
A website is running on Amazon EC2 instances and access is restricted to a limited set of IP ranges. A solutions architect is planning to migrate static content from the website to an Amazon S3 bucket configured as an origin for an Amazon CloudFront distribution. Access to the static content must be restricted to the same set of IP addresses. Which combination of steps will meet these requirements? (Select TWO.)
A. Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects.
B. Create an origin access identity (OAI) and associate it with the distribution. Generate presigned URLs that limit access to the OAI.
C. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the Amazon S3 bucket.
D. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution.
E. Attach the existing security group that contains the IP restrictions to the Amazon CloudFront distribution.
A. Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects.
D. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution.
Explanation:
To prevent users from circumventing the controls implemented on CloudFront (using WAF or presigned URLs / signed cookies) you can use an origin access identity (OAI). An OAI is a special CloudFront user that you associate with a distribution. The next step is to change the permissions either on your Amazon S3 bucket or on the files in your bucket so that only the origin access identity has read permission (or read and download permission). This can be implemented through a bucket policy. To control access at the CloudFront layer, AWS Web Application Firewall (WAF) can be used. With WAF you must create a web ACL that includes the required IP restrictions and then associate the web ACL with the CloudFront distribution.
CORRECT: “Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects” is a correct answer.
CORRECT: “Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution” is also a correct answer.
INCORRECT: “Create an origin access identity (OAI) and associate it with the distribution. Generate presigned URLs that limit access to the OAI” is incorrect. Presigned URLs can be used to protect access to CloudFront, but they cannot be used to limit access to an OAI.
INCORRECT: “Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the Amazon S3 bucket” is incorrect. The web ACL should be associated with CloudFront, not S3.
INCORRECT: “Attach the existing security group that contains the IP restrictions to the Amazon CloudFront distribution” is incorrect. Security groups cannot be attached to CloudFront distributions.
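The bucket-policy half can be sketched as below, with a hypothetical bucket and OAI ID; reads are served through the distribution and direct S3 access is refused:

```python
import json
import boto3

s3 = boto3.client("s3")

# Grant read access only to the distribution's origin access identity,
# so every request is forced through CloudFront (and therefore WAF).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::static-content-bucket/*",
    }],
}
s3.put_bucket_policy(Bucket="static-content-bucket", Policy=json.dumps(policy))
```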
A company is storing a large quantity of small files in an Amazon S3 bucket. An application running on an Amazon EC2 instance needs permissions to access and process the files in the S3 bucket. Which action will MOST securely grant the EC2 instance access to the S3 bucket?
A. Create a bucket ACL on the S3 bucket and configure the EC2 instance ID as a grantee.
B. Create an IAM role with least privilege permissions and attach it to the EC2 instance profile.
C. Create an IAM user for the application with specific permissions to the S3 bucket.
D. Generate access keys and store the credentials on the EC2 instance for use in making API calls.
B. Create an IAM role with least privilege permissions and attach it to the EC2 instance profile.
Explanation:
IAM roles should be used in place of storing credentials on Amazon EC2 instances. This is the most secure way to provide permissions to EC2, as no long-term credentials are stored and short-lived credentials are obtained using AWS STS. Additionally, the policy attached to the role should provide least privilege permissions.
CORRECT: “Create an IAM role with least privilege permissions and attach it to the EC2 instance profile” is the correct answer.
INCORRECT: “Generate access keys and store the credentials on the EC2 instance for use in making API calls” is incorrect. This is not best practice; IAM roles are preferred.
INCORRECT: “Create an IAM user for the application with specific permissions to the S3 bucket” is incorrect. Instances should use IAM roles for delegation, not user accounts.
INCORRECT: “Create a bucket ACL on the S3 bucket and configure the EC2 instance ID as a grantee” is incorrect. You cannot configure an EC2 instance ID as a grantee on a bucket ACL, and bucket ACLs cannot be used to restrict access in this scenario.
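A sketch of the role and instance profile setup in boto3 (role, bucket, and policy names are hypothetical):

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets EC2 assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="app-s3-reader", AssumeRolePolicyDocument=json.dumps(trust))

# Least-privilege inline policy scoped to the single bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::data-bucket", "arn:aws:s3:::data-bucket/*"],
    }],
}
iam.put_role_policy(RoleName="app-s3-reader", PolicyName="s3-read",
                    PolicyDocument=json.dumps(policy))

# The instance profile is what actually attaches the role to an EC2 instance.
iam.create_instance_profile(InstanceProfileName="app-s3-reader")
iam.add_role_to_instance_profile(InstanceProfileName="app-s3-reader",
                                 RoleName="app-s3-reader")
```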
A company requires a solution to allow customers to customize images that are stored in an online catalog. The image customization parameters will be sent in requests to Amazon API Gateway. The customized image will then be generated on-demand and can be accessed online. The solutions architect requires a highly available solution. Which solution will be MOST cost-effective?
A. Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances
B. Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
C. Use AWS Lambda to manipulate the original images to the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances
D. Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
B. Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
Explanation:
All solutions presented are highly available, so the key requirement that must be satisfied is cost-effectiveness. It is therefore best to eliminate services such as Amazon EC2 and ELB, as these incur ongoing costs even when they are not used. Instead, a fully serverless solution should be used. AWS Lambda, Amazon S3, and CloudFront are the best services to use for these requirements.
CORRECT: “Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin” is the correct answer.
INCORRECT: “Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances” is incorrect. This is not the most cost-effective option, as the ELB and EC2 instances incur costs even when not used.
INCORRECT: “Use AWS Lambda to manipulate the original images to the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances” is incorrect. This is not the most cost-effective option, as the ELB incurs costs even when not used. Also, Amazon DynamoDB consumes RCUs/WCUs when running and is not the best choice for storing images.
INCORRECT: “Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin” is incorrect. This is not the most cost-effective option, as the EC2 instances incur costs even when not used.
A solutions architect is finalizing the architecture for a distributed database that will run across multiple Amazon EC2 instances. Data will be replicated across all instances so the loss of an instance will not cause loss of data. The database requires block storage with low latency and throughput that supports up to several million transactions per second per server. Which storage solution should the solutions architect use?
A. Amazon EBS
B. Amazon EC2 instance store
C. Amazon EFS
D. Amazon S3
B. Amazon EC2 instance store
Explanation:
An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
Some instance types use NVMe or SATA-based solid state drives (SSD) to deliver high random I/O performance. This is a good option when you need storage with very low latency, but you don't need the data to persist when the instance terminates, or you can take advantage of fault-tolerant architectures. In this scenario the data is replicated and fault tolerant, so the best option to provide the level of performance required is to use instance store volumes.
CORRECT: “Amazon EC2 instance store” is the correct answer.
INCORRECT: “Amazon EBS” is incorrect. Elastic Block Store (EBS) is a block storage service, but as the data is distributed and fault tolerant, a better option for performance is to use instance stores.
INCORRECT: “Amazon EFS” is incorrect, as EFS is not a block device; it is a file system that is accessed using the NFS protocol.
INCORRECT: “Amazon S3” is incorrect, as S3 is an object-based storage system, not a block-based storage system.