Set 3 Kindle SAA-003 Practice Test Flashcards
A security officer requires that access to company financial reports is logged. The reports are stored in an Amazon S3 bucket. Additionally, any modifications to the log files must be detected. Which actions should a solutions architect take?
A. Use S3 server access logging on the bucket that houses the reports with the read and write data events and the log file validation options enabled
B. Use S3 server access logging on the bucket that houses the reports with the read and write management events and log file validation options enabled
C. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation
D. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write management events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation
C. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation
Explanation:
AWS CloudTrail can be used to log activity on the reports. The key difference between the two answers that include CloudTrail is that one references data events whereas the other references management events. Data events provide visibility into the resource operations performed on or within a resource. These are also known as data plane operations and are often high-volume activities. Example data events include Amazon S3 object-level API activity (for example, the GetObject, DeleteObject, and PutObject API operations) and AWS Lambda function execution activity (the Invoke API). Management events provide visibility into management operations that are performed on resources in your AWS account. These are also known as control plane operations. Example management events include configuring security (for example, the IAM AttachRolePolicy API operation) and registering devices (for example, the Amazon EC2 CreateDefaultVpc API operation). Therefore, to log access to the S3 objects the solutions architect should log read and write data events. Log file validation can also be enabled on the destination bucket. CORRECT: “Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation” is the correct answer. INCORRECT: “Use AWS CloudTrail to create a new trail. Configure the trail to log read and write management events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation” is incorrect as data events should be logged rather than management events. INCORRECT: “Use S3 server access logging on the bucket that houses the reports with the read and write data events and the log file validation options enabled” is incorrect as server access logging does not have an option for choosing data events or log file validation. INCORRECT: “Use S3 server access logging on the bucket that houses the reports with the read and write management events and log file validation options enabled” is incorrect as server access logging does not have an option for choosing management events or log file validation.
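As an illustration only, the following boto3 sketch creates a trail with log file validation and adds an S3 object-level (data event) selector. The trail name, bucket names and region are placeholder assumptions, not values from the question.

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Create the trail, delivering logs to a separate bucket with validation enabled
cloudtrail.create_trail(
    Name="financial-reports-trail",
    S3BucketName="example-trail-logs-bucket",
    EnableLogFileValidation=True,
)

# Log read and write data events (object-level API activity) for the reports bucket
cloudtrail.put_event_selectors(
    TrailName="financial-reports-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": False,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::example-reports-bucket/"],
        }],
    }],
)

cloudtrail.start_logging(Name="financial-reports-trail")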
A company operates a production web application that uses an Amazon RDS MySQL database. The database has automated, non-encrypted daily backups. To increase the security of the data, it has been recommended that encryption should be enabled for backups. Unencrypted backups will be destroyed after the first encrypted backup has been completed. What should be done to enable encryption for future backups?
A. Enable default encryption for the Amazon S3 bucket where backups are stored
B. Modify the backup section of the database configuration to toggle the Enable encryption check box
C. Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot
D. Enable an encrypted read replica on RDS for MySQL. Promote the encrypted read replica to primary. Remove the original database instance
C. Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot
Explanation:
Amazon RDS uses snapshots for backup. Snapshots are encrypted when created only if the database is encrypted, and you can only select encryption for the database when you first create it. In this case the database, and hence the snapshots, are unencrypted. However, you can create an encrypted copy of a snapshot. You can then restore from that encrypted snapshot, which creates a new DB instance that has encryption enabled. From that point on, encryption will be enabled for all snapshots. CORRECT: “Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot” is the correct answer. INCORRECT: “Enable an encrypted read replica on RDS for MySQL. Promote the encrypted read replica to primary. Remove the original database instance” is incorrect as you cannot create an encrypted read replica from an unencrypted master. INCORRECT: “Modify the backup section of the database configuration to toggle the Enable encryption check box” is incorrect as you cannot add encryption to an existing database. INCORRECT: “Enable default encryption for the Amazon S3 bucket where backups are stored” is incorrect because you do not have access to the S3 bucket in which snapshots are stored.
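A minimal boto3 sketch of the snapshot-copy approach, assuming a hypothetical instance identifier and using the AWS managed RDS key; each step must complete before the next, so waiters are used.

import boto3

rds = boto3.client("rds")

# 1. Snapshot the existing unencrypted database
rds.create_db_snapshot(DBSnapshotIdentifier="mydb-unencrypted-snap",
                       DBInstanceIdentifier="mydb")
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-unencrypted-snap")

# 2. Copy the snapshot; supplying a KMS key makes the copy encrypted
rds.copy_db_snapshot(SourceDBSnapshotIdentifier="mydb-unencrypted-snap",
                     TargetDBSnapshotIdentifier="mydb-encrypted-snap",
                     KmsKeyId="alias/aws/rds")
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-encrypted-snap")

# 3. Restore a new, encrypted DB instance from the encrypted copy
rds.restore_db_instance_from_db_snapshot(DBInstanceIdentifier="mydb-encrypted",
                                          DBSnapshotIdentifier="mydb-encrypted-snap")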
A company has deployed an API in a VPC behind an internal Network Load Balancer (NLB). An application that consumes the API as a client is deployed in a second account in private subnets. Which architectural configurations will allow the API to be consumed without using the public Internet? (Select TWO.)
A. Configure a VPC peering connection between the two VPCs. Access the API using the private address
B. Configure an AWS Direct Connect connection between the two VPCs. Access the API using the private address
C. Configure a ClassicLink connection for the API into the client VPC. Access the API using the ClassicLink address
D. Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address
E. Configure an AWS Resource Access Manager connection between the two accounts. Access the API using the private address
A. Configure a VPC peering connection between the two VPCs. Access the API using the private address
D. Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address
Explanation:
You can create your own application in your VPC and configure it as an AWS PrivateLink-powered service (referred to as an endpoint service). Other AWS principals can create a connection from their VPC to your endpoint service using an interface VPC endpoint. You are the service provider, and the AWS principals that create connections to your service are service consumers. This configuration is powered by AWS PrivateLink and clients do not need to use an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection, nor do they require public IP addresses. Another option is to use a VPC peering connection. A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. CORRECT: “Configure a VPC peering connection between the two VPCs. Access the API using the private address” is a correct answer. CORRECT: “Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address” is also a correct answer. INCORRECT: “Configure an AWS Direct Connect connection between the two VPCs. Access the API using the private address” is incorrect. Direct Connect is used for connecting from on-premises data centers into AWS. It is not used from one VPC to another. INCORRECT: “Configure a ClassicLink connection for the API into the client VPC. Access the API using the ClassicLink address” is incorrect. ClassicLink allows you to link EC2-Classic instances to a VPC in your account, within the same Region. This is not relevant to sending data between two VPCs. INCORRECT: “Configure an AWS Resource Access Manager connection between the two accounts. Access the API using the private address” is incorrect. AWS RAM lets you share resources that are provisioned and managed in other AWS services. However, APIs are not shareable resources with AWS RAM.
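For the PrivateLink option, a hedged boto3 sketch: the API owner publishes the internal NLB as an endpoint service, and the consumer account creates an interface endpoint to it. The ARNs, VPC and subnet IDs are placeholders.

import boto3

# Provider account: expose the internal NLB as an endpoint service
provider_ec2 = boto3.client("ec2")
service = provider_ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=["arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/api-nlb/abc123"],
    AcceptanceRequired=False,
)
service_name = service["ServiceConfiguration"]["ServiceName"]

# Consumer account: create an interface endpoint to the service in the client VPC
consumer_ec2 = boto3.client("ec2")  # credentials for the second account
consumer_ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0aaa1111bbb2222cc",
    ServiceName=service_name,
    SubnetIds=["subnet-0123456789abcdef0"],
)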
An application runs on Amazon EC2 Linux instances. The application generates log files which are written using standard API calls. A storage solution is required that can be used to store the files indefinitely and must allow concurrent access to all files. Which storage service meets these requirements and is the MOST cost-effective?
A. Amazon EBS
B. Amazon EFS
C. Amazon EC2 instance store
D. Amazon S3
D. Amazon S3
Explanation:
The application is writing the files using API calls, which means it will be compatible with Amazon S3, which uses a REST API. S3 is a massively scalable key-based object store that is well suited to allowing concurrent access to the files from many instances. Amazon S3 will also be the most cost-effective choice: a rough calculation using the AWS pricing calculator for 1 TB of storage shows S3 Standard is significantly cheaper than EBS or EFS. CORRECT: “Amazon S3” is the correct answer. INCORRECT: “Amazon EFS” is incorrect as, although this does offer concurrent access from many EC2 Linux instances, it is not the most cost-effective solution. INCORRECT: “Amazon EBS” is incorrect. The Elastic Block Store (EBS) is not a good solution for concurrent access from many EC2 instances and is not the most cost-effective option either. EBS volumes are mounted to a single instance except when using Multi-Attach, which is a newer feature with several constraints. INCORRECT: “Amazon EC2 instance store” is incorrect as this is an ephemeral storage solution, which means the data is lost when the instance is powered down. Therefore, this is not an option for long-term data storage.
A production application runs on an Amazon RDS MySQL DB instance. A solutions architect is building a new reporting tool that will access the same data. The reporting tool must be highly available and not impact the performance of the production application. How can this be achieved?
A. Create a cross-region Multi-AZ deployment and create a read replica in the second region
B. Create a Multi-AZ RDS Read Replica of the production RDS DB instance
C. Use Amazon Data Lifecycle Manager to automatically create and manage snapshots
D. Create a Single-AZ RDS Read Replica of the production RDS DB instance. Create a second Single-AZ RDS Read Replica from the replica
B. Create a Multi-AZ RDS Read Replica of the production RDS DB instance
Explanation:
You can create a read replica as a Multi-AZ DB instance. Amazon RDS creates a standby of your replica in another Availability Zone for failover support for the replica. Creating your read replica as a Multi-AZ DB instance is independent of whether the source database is a Multi-AZ DB instance. CORRECT: “Create a Multi-AZ RDS Read Replica of the production RDS DB instance” is the correct answer. INCORRECT: “Create a Single-AZ RDS Read Replica of the production RDS DB instance. Create a second Single-AZ RDS Read Replica from the replica” is incorrect. Read replicas are primarily used for horizontal scaling. The best solution for high availability is to use a Multi-AZ read replica. INCORRECT: “Create a cross-region Multi-AZ deployment and create a read replica in the second region” is incorrect as you cannot create a cross-region Multi-AZ deployment with RDS. INCORRECT: “Use Amazon Data Lifecycle Manager to automatically create and manage snapshots” is incorrect as using snapshots is not the best solution for high availability.
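A one-call boto3 sketch of creating the replica as a Multi-AZ instance; the identifiers are hypothetical.

import boto3

rds = boto3.client("rds")

# The replica itself is deployed as Multi-AZ, independent of the source's configuration
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="reporting-replica",
    SourceDBInstanceIdentifier="production-mysql",
    MultiAZ=True,
)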
An online store uses an Amazon Aurora database. The database is deployed as a Multi-AZ deployment. Recently, metrics have shown that database read requests are high and causing performance issues which result in latency for write requests. What should the solutions architect do to separate the read requests from the write requests?
A. Enable read through caching on the Amazon Aurora database
B. Update the application to read from the Aurora Replica
C. Create a read replica and modify the application to use the appropriate endpoint
D. Create a second Amazon Aurora database and link it to the primary database as a read replica
B. Update the application to read from the Aurora Replica
Explanation:
Aurora Replicas are independent endpoints in an Aurora DB cluster, best used for scaling read operations and increasing availability. Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region. The DB cluster volume is made up of multiple copies of the data for the DB cluster. However, the data in the cluster volume is represented as a single, logical volume to the primary instance and to Aurora Replicas in the DB cluster. As well as providing scaling for reads, Aurora Replicas are also the failover targets in a Multi-AZ deployment. In this case the solutions architect can update the application to read from the Aurora Replica. CORRECT: “Update the application to read from the Aurora Replica” is the correct answer. INCORRECT: “Create a read replica and modify the application to use the appropriate endpoint” is incorrect. An Aurora Replica is both a standby in a Multi-AZ configuration and a target for read traffic, so one already exists; the architect simply needs to direct traffic to it. INCORRECT: “Enable read through caching on the Amazon Aurora database” is incorrect as this is not a feature of Amazon Aurora. INCORRECT: “Create a second Amazon Aurora database and link it to the primary database as a read replica” is incorrect as an Aurora Replica already exists in this Multi-AZ configuration and the standby can be used for read traffic.
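In practice the reporting tool would connect to the cluster's reader endpoint rather than the writer (cluster) endpoint. A small boto3 sketch to look it up, assuming a hypothetical cluster identifier:

import boto3

rds = boto3.client("rds")
cluster = rds.describe_db_clusters(DBClusterIdentifier="store-aurora-cluster")["DBClusters"][0]

writer_endpoint = cluster["Endpoint"]        # used by the application for writes
reader_endpoint = cluster["ReaderEndpoint"]  # load-balances reads across Aurora Replicas
print(reader_endpoint)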
An application is deployed on multiple AWS regions and accessed from around the world. The application exposes static public IP addresses. Some users are experiencing poor performance when accessing the application over the Internet. What should a solutions architect recommend to reduce internet latency?
A. Set up AWS Global Accelerator and add endpoints
B. Set up AWS Direct Connect locations in multiple Regions
C. Set up an Amazon CloudFront distribution to access an application
D. Set up an Amazon Route 53 geoproximity routing policy to route traffic
A. Set up AWS Global Accelerator and add endpoints
Explanation:
AWS Global Accelerator is a service in which you create accelerators to improve availability and performance of your applications for local and global users. Global Accelerator directs traffic to optimal endpoints over the AWS global network. This improves the availability and performance of your internet applications that are used by a global audience. Global Accelerator is a global service that supports endpoints in multiple AWS Regions, which are listed in the AWS Region Table. By default, Global Accelerator provides you with two static IP addresses that you associate with your accelerator. (Or, instead of using the IP addresses that Global Accelerator provides, you can configure these entry points to be IPv4 addresses from your own IP address ranges that you bring to Global Accelerator.) The static IP addresses are anycast from the AWS edge network and distribute incoming application traffic across multiple endpoint resources in multiple AWS Regions, which increases the availability of your applications. Endpoints can be Network Load Balancers, Application Load Balancers, EC2 instances, or Elastic IP addresses that are located in one AWS Region or multiple Regions. CORRECT: “Set up AWS Global Accelerator and add endpoints” is the correct answer. INCORRECT: “Set up AWS Direct Connect locations in multiple Regions” is incorrect as this is used to connect from an on-premises data center to AWS. It does not improve performance for users who are not connected to the on-premises data center. INCORRECT: “Set up an Amazon CloudFront distribution to access an application” is incorrect as CloudFront cannot expose static public IP addresses. INCORRECT: “Set up an Amazon Route 53 geoproximity routing policy to route traffic” is incorrect as this does not reduce internet latency as well as using Global Accelerator. Global Accelerator will direct users to the closest edge location and then use the AWS global network.
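A boto3 sketch of the accelerator setup. Global Accelerator's API endpoint lives in us-west-2; the listener port, second Region and load balancer ARN are placeholder assumptions.

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="global-app-accelerator", IpAddressType="IPV4")
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

listener = ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region; traffic is routed to the closest healthy endpoint
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[{"EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:111111111111:loadbalancer/app/my-alb/abc123"}],
)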
A new application will be launched on an Amazon EC2 instance with an Elastic Block Store (EBS) volume. A solutions architect needs to determine the most cost-effective storage option. The application will have infrequent usage, with peaks of traffic for a couple of hours in the morning and evening. Disk I/O is variable with peaks of up to 3,000 IOPS. Which solution should the solutions architect recommend?
A. Amazon EBS Cold HDD (sc1)
B. Amazon EBS General Purpose SSD (gp2)
C. Amazon EBS Provisioned IOPS SSD (io1)
D. Amazon EBS Throughput Optimized HDD (st1)
B. Amazon EBS General Purpose SSD (gp2)
Explanation:
General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time. Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 16,000 IOPS (at 5,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size. AWS designs gp2 volumes to deliver their provisioned performance 99% of the time. A gp2 volume can range in size from 1 GiB to 16 TiB. For example, a 200 GiB volume would have a baseline performance of 3 x 200 = 600 IOPS and could also burst to 3,000 IOPS for extended periods. As the I/O is variable with peaks of up to 3,000 IOPS, this should be suitable. CORRECT: “Amazon EBS General Purpose SSD (gp2)” is the correct answer. INCORRECT: “Amazon EBS Provisioned IOPS SSD (io1)” is incorrect as this would be a more expensive option and is not required for the performance characteristics of this workload. INCORRECT: “Amazon EBS Cold HDD (sc1)” is incorrect as there is no IOPS SLA for HDD volumes and they would likely not perform well enough for this workload. INCORRECT: “Amazon EBS Throughput Optimized HDD (st1)” is incorrect as there is no IOPS SLA for HDD volumes and they would likely not perform well enough for this workload.
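The baseline arithmetic can be sketched as a small helper; the 200 GiB size is only the illustrative assumption used in the example above.

def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline scales at 3 IOPS/GiB, floored at 100 IOPS and capped at 16,000 IOPS."""
    return min(max(size_gib * 3, 100), 16000)

print(gp2_baseline_iops(200))   # 600 baseline IOPS, with burst to 3,000 IOPS
print(gp2_baseline_iops(5334))  # 16000 (the maximum)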
A security team wants to limit access to specific services or actions in all of the team’s AWS accounts. All accounts belong to a large organization in AWS Organizations. The solution must be scalable and there must be a single point where permissions can be maintained. What should a solutions architect do to accomplish this?
A. Create an ACL to provide access to the services or actions
B. Create a security group to allow accounts and attach it to user groups
C. Create cross-account roles in each account to deny access to the services or actions
D. Create a service control policy in the root organizational unit to deny access to the services or actions
D. Create a service control policy in the root organizational unit to deny access to the services or actions
Explanation:
Service control policies (SCPs) offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines. SCPs alone are not sufficient for allowing access in the accounts in your organization. Attaching an SCP to an AWS Organizations entity (root, OU, or account) defines a guardrail for what actions the principals can perform. You still need to attach identity-based or resource-based policies to principals or resources in your organization’s accounts to actually grant permissions to them. CORRECT: “Create a service control policy in the root organizational unit to deny access to the services or actions” is the correct answer. INCORRECT: “Create an ACL to provide access to the services or actions” is incorrect as access control lists are not used for permissions associated with IAM. Permissions policies are used with IAM. INCORRECT: “Create a security group to allow accounts and attach it to user groups” is incorrect as security groups are instance-level firewalls. They do not limit service actions. INCORRECT: “Create cross-account roles in each account to deny access to the services or actions” is incorrect as this is a complex solution and does not provide centralized control.
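A hedged boto3 sketch of creating and attaching an SCP at the root. The denied service (Amazon SageMaker here) and the policy name are placeholder assumptions, not part of the question.

import boto3, json

org = boto3.client("organizations")

scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["sagemaker:*"],   # example service to block across all accounts
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="deny-restricted-services",
    Description="Blocks services the security team has not approved",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=root_id)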
A company is planning to use Amazon S3 to store documents uploaded by its customers. The documents must be encrypted at rest in Amazon S3. The company does not want to spend time managing and rotating the keys, but it does want to control who can access those keys. What should a solutions architect use to accomplish this?
A. Server-Side Encryption with keys stored in an S3 bucket
B. Server-Side Encryption with Customer-Provided Keys (SSE-C)
C. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
Explanation:
SSE-KMS requires that AWS manage the data key but you manage the customer master key (CMK) in AWS KMS. You can choose a customer managed CMK or the AWS managed CMK for Amazon S3 in your account. Customer managed CMKs are CMKs in your AWS account that you create, own, and manage. You have full control over these CMKs, including establishing and maintaining their key policies, IAM policies, and grants, enabling and disabling them, rotating their cryptographic material, adding tags, creating aliases that refer to the CMK, and scheduling the CMKs for deletion. For this scenario, the solutions architect should use SSE-KMS with a customer managed CMK. That way KMS will manage the data key but the company can configure key policies defining who can access the keys. CORRECT: “Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)” is the correct answer. INCORRECT: “Server-Side Encryption with keys stored in an S3 bucket” is incorrect as you cannot store your keys in a bucket with server-side encryption. INCORRECT: “Server-Side Encryption with Customer-Provided Keys (SSE-C)” is incorrect as the company does not want to manage the keys. INCORRECT: “Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)” is incorrect as the company needs to manage access control for the keys, which is not possible when they’re managed by Amazon.
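A boto3 sketch of setting SSE-KMS as the bucket default; the bucket name and customer managed key alias are hypothetical.

import boto3

s3 = boto3.client("s3")

# All new objects are encrypted with the customer managed KMS key by default
s3.put_bucket_encryption(
    Bucket="example-customer-documents",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/customer-docs-key",
            }
        }]
    },
)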
A company has some statistical data stored in an Amazon RDS database. The company want to allow users to access this information using an API. A solutions architect must create a solution that allows sporadic access to the data, ranging from no requests to large bursts of traffic. Which solution should the solutions architect suggest?
A. Set up an Amazon API Gateway and use Amazon ECS
B. Set up an Amazon API Gateway and use AWS Elastic Beanstalk
C. Set up an Amazon API Gateway and use AWS Lambda functions
D. Set up an Amazon API Gateway and use Amazon EC2 with Auto Scaling
C. Set up an Amazon API Gateway and use AWS Lambda functions
Explanation:
This question is simply asking you to work out the best compute service for the stated requirements. The key requirements are that the compute service should be suitable for a workload that can range quite broadly in demand, from no requests to large bursts of traffic. AWS Lambda is an ideal solution as you pay only when requests are made and it can easily scale to accommodate the large bursts in traffic. Lambda works well with both API Gateway and Amazon RDS. CORRECT: “Set up an Amazon API Gateway and use AWS Lambda functions” is the correct answer. INCORRECT: “Set up an Amazon API Gateway and use Amazon ECS” is incorrect as ECS tasks must be kept running (or scaled out) to serve requests, which is less cost-effective for a workload that is often idle. INCORRECT: “Set up an Amazon API Gateway and use AWS Elastic Beanstalk” is incorrect as Elastic Beanstalk keeps EC2 instances running even when there are no requests. INCORRECT: “Set up an Amazon API Gateway and use Amazon EC2 with Auto Scaling” is incorrect as EC2 instances run continuously and Auto Scaling reacts more slowly to sudden bursts than Lambda.
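A minimal Lambda handler for an API Gateway proxy integration, shown only to illustrate the shape of the response the integration expects; fetching the statistics from the RDS database is stubbed out with example values.

import json

def lambda_handler(event, context):
    # In a real function this would query the RDS database (e.g. via PyMySQL)
    statistics = {"visitors": 1024, "conversions": 87}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(statistics),
    }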
A company runs a financial application using an Amazon EC2 Auto Scaling group behind an Application Load Balancer (ALB). When running month-end reports on a specific day and time each month the application becomes unacceptably slow. Amazon CloudWatch metrics show the CPU utilization hitting 100%. What should a solutions architect recommend to ensure the application is able to handle the workload and avoid downtime?
A. Configure an Amazon CloudFront distribution in front of the ALB
B. Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization
C. Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule
D. Configure Amazon ElastiCache to remove some of the workload from the EC2 instances
C. Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule
Explanation:
Scheduled scaling allows you to set your own scaling schedule. In this case the scaling action can be scheduled to occur just prior to the time that the reports will be run each month. Scaling actions are performed automatically as a function of time and date. This will ensure that there are enough EC2 instances to serve the demand and prevent the application from slowing down. CORRECT: “Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule” is the correct answer. INCORRECT: “Configure an Amazon CloudFront distribution in front of the ALB” is incorrect as this would be more suitable for providing access to global users by caching content. INCORRECT: “Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization” is incorrect as this would not prevent the slow-down from occurring as there would be a delay between when the CPU hits 100% and the metric being reported and additional instances being launched. INCORRECT: “Configure Amazon ElastiCache to remove some of the workload from the EC2 instances” is incorrect as ElastiCache is a database cache, it cannot replace the compute functions of an EC2 instance.
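A boto3 sketch of a scheduled action that scales out ahead of the month-end run; the group name, capacities and the cron expression (07:30 UTC on the first of each month) are assumptions.

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the reports are run each month
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="finance-app-asg",
    ScheduledActionName="month-end-scale-out",
    Recurrence="30 7 1 * *",
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)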
A solutions architect is designing a high performance computing (HPC) application using Amazon EC2 Linux instances. All EC2 instances need to communicate to each other with low latency and high throughput network performance. Which EC2 solution BEST meets these requirements?
A. Launch the EC2 instances in a cluster placement group in one Availability Zone
B. Launch the EC2 instances in a spread placement group in one Availability Zone
C. Launch the EC2 instances in an Auto Scaling group in two Regions. Place a Network Load Balancer in front of the instances
D. Launch the EC2 instances in an Auto Scaling group spanning multiple Availability Zones
A. Launch the EC2 instances in a cluster placement group in one Availability Zone
Explanation:
When you launch a new EC2 instance, the EC2 service attempts to place the instance in such a way that all of your instances are spread out across underlying hardware to minimize correlated failures. You can use placement groups to influence the placement of a group of interdependent instances to meet the needs of your workload. Depending on the type of workload, you can create a placement group using one of the following placement strategies: Cluster – packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications. Partition – spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions. This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka. Spread – strictly places a small group of instances across distinct underlying hardware to reduce correlated failures. For this scenario, a cluster placement group should be used as this is the best option for providing low-latency network performance for an HPC application. CORRECT: “Launch the EC2 instances in a cluster placement group in one Availability Zone” is the correct answer. INCORRECT: “Launch the EC2 instances in a spread placement group in one Availability Zone” is incorrect as the spread placement group is used to spread instances across distinct underlying hardware. INCORRECT: “Launch the EC2 instances in an Auto Scaling group in two Regions. Place a Network Load Balancer in front of the instances” is incorrect as this does not achieve the stated requirement to provide low-latency, high throughput network performance between instances. Also, you cannot use an ELB across Regions. INCORRECT: “Launch the EC2 instances in an Auto Scaling group spanning multiple Availability Zones” is incorrect as this does not reduce network latency or improve performance.
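A boto3 sketch of launching instances into a cluster placement group; the AMI ID, instance type and count are placeholder assumptions.

import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# All instances land on closely connected hardware in a single Availability Zone
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "hpc-cluster"},
)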
A web application in a three-tier architecture runs on a fleet of Amazon EC2 instances. Performance issues have been reported and investigations point to insufficient swap space. The operations team requires monitoring to determine if this is correct. What should a solutions architect recommend?
A. Configure an Amazon CloudWatch SwapUsage metric dimension. Monitor the SwapUsage dimension in the EC2 metrics in CloudWatch
B. Use EC2 metadata to collect information, then publish it to Amazon CloudWatch custom metrics. Monitor SwapUsage metrics in CloudWatch
C. Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor SwapUtilization metrics in CloudWatch
D. Enable detailed monitoring in the EC2 console. Create an Amazon CloudWatch SwapUtilization custom metric. Monitor SwapUtilization metrics in CloudWatch
C. Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor SwapUtilization metrics in CloudWatch
Explanation:
You can use the CloudWatch agent to collect both system metrics and log files from Amazon EC2 instances and on-premises servers. The agent supports both Windows Server and Linux and enables you to select the metrics to be collected, including sub-resource metrics such as per-CPU core. There is now a unified CloudWatch agent, which supersedes the older monitoring scripts; both tools can capture SwapUtilization metrics and send them to CloudWatch. This is the best way to get memory and swap utilization metrics from Amazon EC2 instances. CORRECT: “Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor SwapUtilization metrics in CloudWatch” is the correct answer. INCORRECT: “Enable detailed monitoring in the EC2 console. Create an Amazon CloudWatch SwapUtilization custom metric. Monitor SwapUtilization metrics in CloudWatch” is incorrect as you do not create custom metrics in the console; you must configure the instances to send the metric information to CloudWatch. INCORRECT: “Configure an Amazon CloudWatch SwapUsage metric dimension. Monitor the SwapUsage dimension in the EC2 metrics in CloudWatch” is incorrect as there is no SwapUsage metric in CloudWatch. All memory metrics must be custom metrics. INCORRECT: “Use EC2 metadata to collect information, then publish it to Amazon CloudWatch custom metrics. Monitor SwapUsage metrics in CloudWatch” is incorrect as performance-related information is not stored in instance metadata.
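For illustration, a script of the kind the answer describes: it reads swap usage from /proc/meminfo and publishes it as a custom CloudWatch metric when run on a schedule. The namespace and metric name are assumptions; the unified CloudWatch agent can collect the same data through its swap measurements instead.

import boto3

def swap_utilization_percent() -> float:
    meminfo = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            meminfo[key] = int(value.strip().split()[0])  # values reported in kB
    total, free = meminfo["SwapTotal"], meminfo["SwapFree"]
    return 0.0 if total == 0 else 100.0 * (total - free) / total

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="Custom/System",
    MetricData=[{
        "MetricName": "SwapUtilization",
        "Unit": "Percent",
        "Value": swap_utilization_percent(),
    }],
)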
A gaming company collects real-time data and stores it in an on-premises database system. The company are migrating to AWS and need better performance for the database. A solutions architect has been asked to recommend an in-memory database that supports data replication. Which database should a solutions architect recommend?
A. Amazon RDS for MySQL
B. Amazon RDS for PostgreSQL
C. Amazon ElastiCache for Redis
D. Amazon ElastiCache for Memcached
C. Amazon ElastiCache for Redis
Explanation:
Amazon ElastiCache is an in-memory database service. With ElastiCache for Memcached there is no data replication or high availability; each node is a separate partition of data. Therefore, the Redis engine must be used, which supports both data replication and clustering. CORRECT: “Amazon ElastiCache for Redis” is the correct answer. INCORRECT: “Amazon ElastiCache for Memcached” is incorrect as Memcached does not support data replication or high availability. INCORRECT: “Amazon RDS for MySQL” is incorrect as this is not an in-memory database. INCORRECT: “Amazon RDS for PostgreSQL” is incorrect as this is not an in-memory database.
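A hedged boto3 sketch of a Redis replication group with a replica and automatic failover; the identifier, node type and node count are assumptions.

import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="game-data-redis",
    ReplicationGroupDescription="Real-time game data with replication",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumCacheClusters=2,              # one primary plus one replica for data replication
    AutomaticFailoverEnabled=True,
)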
A company has experienced malicious traffic from some suspicious IP addresses. The security team discovered the requests are from different IP addresses under the same CIDR range. What should a solutions architect recommend to the team?
A. Add a rule in the inbound table of the security group to deny the traffic from that CIDR range
B. Add a rule in the outbound table of the security group to deny the traffic from that CIDR range
C. Add a deny rule in the inbound table of the network ACL with a lower rule number than other rules
D. Add a deny rule in the outbound table of the network ACL with a lower rule number than other rules
C. Add a deny rule in the inbound table of the network ACL with a lower rule number than other rules
Explanation:
You can only create deny rules with network ACLs; it is not possible with security groups. Network ACLs process rules in order, from the lowest numbered rule to the highest, until they reach a matching allow or deny rule. Security groups, by contrast, are stateful and support allow rules only. Therefore, the solutions architect should add a deny rule in the inbound table of the network ACL with a lower rule number than other rules. CORRECT: “Add a deny rule in the inbound table of the network ACL with a lower rule number than other rules” is the correct answer. INCORRECT: “Add a deny rule in the outbound table of the network ACL with a lower rule number than other rules” is incorrect as this will only block outbound traffic. INCORRECT: “Add a rule in the inbound table of the security group to deny the traffic from that CIDR range” is incorrect as you cannot create a deny rule with a security group. INCORRECT: “Add a rule in the outbound table of the security group to deny the traffic from that CIDR range” is incorrect as you cannot create a deny rule with a security group.
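A boto3 sketch of the deny entry; the ACL ID, CIDR range and rule number 50 (lower than the default allow rules) are placeholder assumptions.

import boto3

ec2 = boto3.client("ec2")

# Inbound (Egress=False) deny for the suspicious CIDR, evaluated before higher-numbered rules
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=50,
    Protocol="-1",          # all protocols
    RuleAction="deny",
    Egress=False,
    CidrBlock="203.0.113.0/24",
)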
A solutions architect is designing a microservices architecture. AWS Lambda will store data in an Amazon DynamoDB table named Orders. The solutions architect needs to apply an IAM policy to the Lambda function’s execution role to allow it to put, update, and delete items in the Orders table. No other actions should be allowed. Which of the following code snippets should be included in the IAM policy to fulfill this requirement whilst providing the LEAST privileged access?
A. “Sid”: “PutUpdateDeleteOnOrders”,
“Effect”: “Allow”,
“Action”: [
“dynamodb:PutItem”,
“dynamodb:UpdateItem”,
“dynamodb:DeleteItem” ],
“Resource”: “arn:aws:dynamodb:us-east-1:227392126428:table/Orders”
B. “Sid”: “PutUpdateDeleteOnOrders”,
“Effect”: “Allow”,
“Action”: [
“dynamodb:PutItem”,
“dynamodb:UpdateItem”,
“dynamodb:DeleteItem” ],
“Resource”: “arn:aws:dynamodb:us-east-1:227392126428:table/*”
C. “Sid”: “PutUpdateDeleteOnOrders”,
“Effect”: “Allow”,
“Action”: “dynamodb:* “,
“Resource”: “arn:aws:dynamodb:us-east-1:227392126428:table/Orders”
D. “Sid”: “PutUpdateDeleteOnOrders”,
“Effect”: “Deny”,
“Action”: “dynamodb:* “,
“Resource”: “arn:aws:dynamodb:us-east-1:227392126428:table/Orders”
A. “Sid”: “PutUpdateDeleteOnOrders”,
“Effect”: “Allow”,
“Action”: [
“dynamodb:PutItem”,
“dynamodb:UpdateItem”,
“dynamodb:DeleteItem” ],
“Resource”: “arn:aws:dynamodb:us-east-1:227392126428:table/Orders”
Explanation:
The key requirements are to allow the Lambda function the put, update, and delete actions on a single table. Using the principle of least privilege the answer should not allow any other access. CORRECT: The following answer is correct:
“Sid”: “PutUpdateDeleteOnOrders”,
“Effect”: “Allow”,
“Action”: [
“dynamodb:PutItem”,
“dynamodb:UpdateItem”,
“dynamodb:DeleteItem” ],
“Resource”: “arn:aws:dynamodb:us-east-1:227392126428:table/Orders”
This code snippet specifies the exact actions to allow and also specifies the exact resource to apply those permissions to. INCORRECT: the following answer is incorrect: “Sid”: “PutUpdateDeleteOnOrders”, “Effect”: “Allow”, “Action”: [ “dynamodb:PutItem”, “dynamodb:UpdateItem”, “dynamodb:DeleteItem” ], “Resource”: “arn:aws:dynamodb:us-east-1:227392126428:table/*” This code snippet specifies the correct list of actions but it provides a wildcard “*” for the table name instead of specifying the exact resource. Therefore, the function will be able to put, update, and delete items on any table in the account. INCORRECT: the following answer is incorrect: “Sid”: “PutUpdateDeleteOnOrders”, “Effect”: “Allow”, “Action”: “dynamodb:*”, “Resource”: “arn:aws:dynamodb:us-east-1:227392126428:table/Orders” This code snippet allows any DynamoDB action by using the wildcard “dynamodb:*”. This does not follow the principle of least privilege. INCORRECT: the following answer is incorrect: “Sid”: “PutUpdateDeleteOnOrders”, “Effect”: “Deny”, “Action”: “dynamodb:*”, “Resource”: “arn:aws:dynamodb:us-east-1:227392126428:table/Orders” This code snippet denies all DynamoDB actions on the table. This does not have the desired effect.
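For completeness, a sketch of the statement wrapped in a full Version/Statement policy document and attached inline to the function's execution role with boto3; the role and policy names are assumptions.

import boto3, json

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PutUpdateDeleteOnOrders",
        "Effect": "Allow",
        "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:DeleteItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:227392126428:table/Orders",
    }],
}

iam.put_role_policy(
    RoleName="orders-lambda-execution-role",
    PolicyName="orders-table-write-access",
    PolicyDocument=json.dumps(policy_document),
)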
A company has created a duplicate of its environment in another AWS Region. The application is running in warm standby mode. There is an Application Load Balancer (ALB) in front of the application. Currently, failover is manual and requires updating a DNS alias record to point to the secondary ALB. How can a solutions architect automate the failover process?
A. Enable an ALB health check
B. Enable an Amazon Route 53 health check
C. Create a CNAME record on Amazon Route 53 pointing to the ALB endpoint
D. Create a latency based routing policy on Amazon Route 53
B. Enable an Amazon Route 53 health check
Explanation:
You can use Route 53 to check the health of your resources and only return healthy resources in response to DNS queries. There are three types of DNS failover configurations: Active-passive: Route 53 actively returns a primary resource. In case of failure, Route 53 returns the backup resource. Configured using a failover policy. Active-active: Route 53 actively returns more than one resource. In case of failure, Route 53 fails back to the healthy resource. Configured using any routing policy besides failover. Combination: multiple routing policies (such as latency-based, weighted, etc.) are combined into a tree to configure more complex DNS failover. In this case an alias record already exists for the secondary ALB. Therefore, the solutions architect just needs to enable a failover configuration with an Amazon Route 53 health check. CORRECT: “Enable an Amazon Route 53 health check” is the correct answer. INCORRECT: “Enable an ALB health check” is incorrect. The point of an ALB health check is to identify the health of targets (EC2 instances). It cannot redirect clients to another Region. INCORRECT: “Create a CNAME record on Amazon Route 53 pointing to the ALB endpoint” is incorrect as an alias record already exists, and alias records are better for mapping to an ALB. INCORRECT: “Create a latency based routing policy on Amazon Route 53” is incorrect as this only takes latency into account; it is not used for failover.
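A boto3 sketch of the failover pieces: a health check that targets the primary ALB, and a PRIMARY failover alias record that references it (a matching SECONDARY record would point at the standby ALB). The hosted zone, domain and ALB values are placeholders.

import boto3

route53 = boto3.client("route53")

health_check = route53.create_health_check(
    CallerReference="primary-alb-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary-alb-1234.us-east-1.elb.amazonaws.com",
        "Port": 443,
        "ResourcePath": "/health",
    },
)

route53.change_resource_record_sets(
    HostedZoneId="Z1D633PJN98FT9",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "HealthCheckId": health_check["HealthCheck"]["Id"],
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",   # the ALB's own hosted zone ID
                "DNSName": "primary-alb-1234.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)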
An application allows users to upload and download files. Files older than 2 years will be accessed less frequently. A solutions architect needs to ensure that the application can scale to any number of files while maintaining high availability and durability. Which scalable solutions should the solutions architect recommend?
A. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard Infrequent Access (S3 Standard-IA)
B. Store the files on Amazon Elastic File System (EFS) with a lifecycle policy that moves objects older than 2 years to EFS Infrequent Access (EFS IA)
C. Store the files in Amazon Elastic Block Store (EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data older than 2 years
D. Store the files in Amazon Elastic Block Store (EBS) volumes. Create a lifecycle policy to move files older than 2 years to Amazon S3 Glacier
A. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard Infrequent Access (S3 Standard-IA)
Explanation:
S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per-GB storage price and a per-GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. CORRECT: “Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard Infrequent Access (S3 Standard-IA)” is the correct answer. INCORRECT: “Store the files on Amazon Elastic File System (EFS) with a lifecycle policy that moves objects older than 2 years to EFS Infrequent Access (EFS IA)” is incorrect. With EFS you can transition files to EFS IA after they have not been accessed for a specified period of time, with options up to 90 days. You cannot transition based on an age of 2 years. INCORRECT: “Store the files in Amazon Elastic Block Store (EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data older than 2 years” is incorrect. You cannot identify the age of data and archive snapshots in this way with EBS. INCORRECT: “Store the files in Amazon Elastic Block Store (EBS) volumes. Create a lifecycle policy to move files older than 2 years to Amazon S3 Glacier” is incorrect. You cannot archive files from an EBS volume to Glacier using lifecycle policies.
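A boto3 sketch of the lifecycle rule, transitioning objects to Standard-IA after roughly two years (730 days); the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-user-files",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "move-old-files-to-ia",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # apply to all objects
            "Transitions": [{"Days": 730, "StorageClass": "STANDARD_IA"}],
        }]
    },
)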
A company is planning to migrate a large quantity of important data to Amazon S3. The data will be uploaded to a versioning enabled bucket in the us-west-1 Region. The solution needs to include replication of the data to another Region for disaster recovery purposes. How should a solutions architect configure the replication?
A. Create an additional S3 bucket in another Region and configure cross-Region replication
B. Create an additional S3 bucket in another Region and configure cross-origin resource sharing (CORS)
C. Create an additional S3 bucket with versioning in another Region and configure cross-Region replication
D. Create an additional S3 bucket with versioning in another Region and configure cross-origin resource sharing (CORS)
C. Create an additional S3 bucket with versioning in another Region and configure cross-Region replication
Explanation:
Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. You can copy objects between different AWS Regions or within the same Region. Both source and destination buckets must have versioning enabled. CORRECT: “Create an additional S3 bucket with versioning in another Region and configure cross-Region replication” is the correct answer. INCORRECT: “Create an additional S3 bucket in another Region and configure cross-Region replication” is incorrect as the destination bucket must also have versioning enabled. INCORRECT: “Create an additional S3 bucket in another Region and configure cross-origin resource sharing (CORS)” is incorrect as CORS is not related to replication. INCORRECT: “Create an additional S3 bucket with versioning in another Region and configure cross-origin resource sharing (CORS)” is incorrect as CORS is not related to replication.
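A boto3 sketch of the replication setup; both buckets must have versioning enabled, and the bucket names and replication role ARN are placeholders.

import boto3

s3 = boto3.client("s3")

# Versioning is required on both the source and destination buckets
for bucket in ("example-docs-us-west-1", "example-docs-dr-us-east-2"):
    s3.put_bucket_versioning(Bucket=bucket,
                             VersioningConfiguration={"Status": "Enabled"})

s3.put_bucket_replication(
    Bucket="example-docs-us-west-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
        "Rules": [{
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::example-docs-dr-us-east-2"},
        }],
    },
)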
An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run in an Amazon EC2 Auto Scaling group behind an Application Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 40%. What should a solutions architect do to maintain the desired performance across all instances in the group?
A. Use a simple scaling policy to dynamically scale the Auto Scaling group
B. Use a target tracking policy to dynamically scale the Auto Scaling group
C. Use an AWS Lambda function to update the desired Auto Scaling group capacity
D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group
B. Use a target tracking policy to dynamically scale the Auto Scaling group
Explanation:
With target tracking scaling policies, you select a scaling metric and set a target value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking scaling policy also adjusts to the changes in the metric due to a changing load pattern. CORRECT: “Use a target tracking policy to dynamically scale the Auto Scaling group” is the correct answer. INCORRECT: “Use a simple scaling policy to dynamically scale the Auto Scaling group” is incorrect as target tracking is a better way to keep the aggregate CPU usage at around 40% INCORRECT: “Use an AWS Lambda function to update the desired Auto Scaling group capacity” is incorrect as this can be done automatically. INCORRECT: “Use scheduled scaling actions to scale up and scale down the Auto Scaling group” is incorrect as dynamic scaling is required to respond to changes in utilization.
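A boto3 sketch of a target tracking policy pinned to 40% average CPU; the Auto Scaling group name is an assumption.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",
    PolicyName="keep-cpu-near-40",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 40.0,
    },
)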
A High Performance Computing (HPC) application needs storage that can provide 135,000 IOPS. The storage layer is replicated across all instances in a cluster. What is the optimal storage solution that provides the required performance and is cost-effective?
A. Use Amazon EBS Provisioned IOPS volume with 135,000 IOPS
B. Use Amazon Instance Store
C. Use Amazon S3 with byte-range fetch
D. Use Amazon EC2 Enhanced Networking with an EBS HDD Throughput Optimized volume
B. Use Amazon Instance Store
Explanation:
Instance stores offer very high performance and low latency. As long as you can afford to lose an instance, i.e. you are replicating your data, these can be a good solution for high performance/low latency requirements. Also, the cost of instance stores is included in the instance charges, so it can be more cost-effective than EBS Provisioned IOPS. CORRECT: “Use Amazon Instance Store” is the correct answer. INCORRECT: “Use Amazon EBS Provisioned IOPS volume with 135,000 IOPS” is incorrect. In the case of an HPC cluster that replicates data between nodes you don’t necessarily need a shared storage solution such as Amazon EBS Provisioned IOPS – this would also be a more expensive solution, as the instance store is included in the cost of the HPC instance. INCORRECT: “Use Amazon S3 with byte-range fetch” is incorrect. Amazon S3 is not a solution for this HPC application as it requires block-based storage to provide the required IOPS. INCORRECT: “Use Amazon EC2 Enhanced Networking with an EBS HDD Throughput Optimized volume” is incorrect. Enhanced networking provides higher bandwidth and lower latency and is implemented using an Elastic Network Adapter (ENA). However, an HDD Throughput Optimized volume will not provide the IOPS required for this use case.
A high-performance file system is required for a financial modelling application. The data set will be stored on Amazon S3 and the storage solution must have seamless integration so objects can be accessed as files. Which storage solution should be used?
A. Amazon FSx for Windows File Server
B. Amazon FSx for Lustre
C. Amazon Elastic File System (EFS)
D. Amazon Elastic Block Store (EBS)
B. Amazon FSx for Lustre
Explanation:
Amazon FSx for Lustre provides a high-performance file system optimized for fast processing of workloads such as machine learning, high performance computing (HPC), video processing, financial modeling, and electronic design automation (EDA). Amazon FSx works natively with Amazon S3, letting you transparently access your S3 objects as files on Amazon FSx to run analyses for hours to months. CORRECT: “Amazon FSx for Lustre” is the correct answer. INCORRECT: “Amazon FSx for Windows File Server” is incorrect. Amazon FSx for Windows File Server provides a fully managed native Microsoft Windows file system so you can easily move your Windows-based applications that require shared file storage to AWS. This solution integrates with Windows file shares, not with Amazon S3. INCORRECT: “Amazon Elastic File System (EFS)” is incorrect. EFS and EBS are not good use cases for this solution. Neither storage solution is capable of presenting Amazon S3 objects as files to the application. INCORRECT: “Amazon Elastic Block Store (EBS)” is incorrect. EFS and EBS are not good use cases for this solution. Neither storage solution is capable of presenting Amazon S3 objects as files to the application.
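A hedged boto3 sketch linking an FSx for Lustre file system to the S3 data set; the subnet ID, capacity and bucket name are placeholders.

import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                      # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "ImportPath": "s3://example-financial-models",           # S3 objects appear as files
        "ExportPath": "s3://example-financial-models/results",   # results written back to S3
    },
)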
An application requires a MySQL database which will only be used several times a week for short periods. The database needs to provide automatic instantiation and scaling. Which database service is most suitable?
A. Amazon RDS MySQL
B. Amazon EC2 instance with MySQL database installed
C. Amazon Aurora
D. Amazon Aurora Serverless
D. Amazon Aurora Serverless
Explanation:
Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora. The database automatically starts up, shuts down, and scales capacity up or down based on application needs. This is an ideal database solution for infrequently-used applications. CORRECT: “Amazon Aurora Serverless” is the correct answer. INCORRECT: “Amazon RDS MySQL” is incorrect as this service requires an instance to be running all the time which is more costly. INCORRECT: “Amazon EC2 instance with MySQL database installed” is incorrect as this service requires an instance to be running all the time which is more costly. INCORRECT: “Amazon Aurora” is incorrect as this service requires an instance to be running all the time which is more costly.
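A hedged boto3 sketch of an Aurora Serverless (v1) cluster that pauses when idle; the engine version, capacity range and credentials are assumptions and would need to match a version that supports the serverless engine mode.

import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="occasional-mysql",
    Engine="aurora-mysql",
    EngineVersion="5.7.mysql_aurora.2.07.1",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    ScalingConfiguration={
        "MinCapacity": 1,
        "MaxCapacity": 8,
        "AutoPause": True,              # stop paying for compute when idle
        "SecondsUntilAutoPause": 600,
    },
)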
An Architect needs to find a way to automatically and repeatably create many member accounts within an AWS Organization. The accounts also need to be moved into an OU and have VPCs and subnets created. What is the best way to achieve this?
A. Use the AWS Organizations API
B. Use CloudFormation with scripts
C. Use the AWS Management Console
D. Use the AWS CLI
B. Use CloudFormation with scripts
Explanation:
The best solution is to use a combination of scripts and AWS CloudFormation. You will also leverage the AWS Organizations API. This solution can provide all of the requirements. CORRECT: “Use CloudFormation with scripts” is the correct answer. INCORRECT: “Use the AWS Organizations API” is incorrect. You can create member accounts with the AWS Organizations API. However, you cannot use that API to configure the account and create VPCs and subnets. INCORRECT: “Use the AWS Management Console” is incorrect. Using the AWS Management Console is not a method of automatically creating the resources. INCORRECT: “Use the AWS CLI” is incorrect. You can do all tasks using the AWS CLI but it is better to automate the process using AWS CloudFormation.
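A simplified sketch of the scripted part: create the member account, wait for it, and move it into the target OU; a CloudFormation template (for example, deployed via StackSets) would then create the VPCs and subnets in the new account. The email address, OU ID and root ID are placeholders.

import time
import boto3

org = boto3.client("organizations")

request = org.create_account(Email="aws+member1@example.com", AccountName="member-1")
request_id = request["CreateAccountStatus"]["Id"]

# Account creation is asynchronous; poll until it finishes
while True:
    status = org.describe_create_account_status(CreateAccountRequestId=request_id)["CreateAccountStatus"]
    if status["State"] != "IN_PROGRESS":
        break
    time.sleep(10)

account_id = status["AccountId"]
root_id = org.list_roots()["Roots"][0]["Id"]
org.move_account(AccountId=account_id, SourceParentId=root_id,
                 DestinationParentId="ou-abcd-11111111")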
An organization is extending a secure development environment into AWS. They have already secured the VPC including removing the Internet Gateway and setting up a Direct Connect connection. What else needs to be done to add encryption?
A. Setup a Virtual Private Gateway (VPG)
B. Enable IPSec encryption on the Direct Connect connection
C. Setup the Border Gateway Protocol (BGP) with encryption
D. Configure an AWS Direct Connect Gateway
A. Setup a Virtual Private Gateway (VPG)
Explanation:
A VPG is used to setup an AWS VPN which you can use in combination with Direct Connect to encrypt all data that traverses the Direct Connect link. This combination provides an IPsec-encrypted private connection that also reduces network costs, increases bandwidth throughput, and provides a more consistent network experience than internet-based VPN connections. CORRECT: “Setup a Virtual Private Gateway (VPG)” is the correct answer. INCORRECT: “Enable IPSec encryption on the Direct Connect connection” is incorrect. There is no option to enable IPSec encryption on the Direct Connect connection. INCORRECT: “Setup the Border Gateway Protocol (BGP) with encryption” is incorrect. The BGP protocol is not used to enable encryption for Direct Connect, it is used for routing. INCORRECT: “Configure an AWS Direct Connect Gateway” is incorrect. An AWS Direct Connect Gateway is used to connect to VPCs across multiple AWS regions. It is not involved with encryption.
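A boto3 sketch of the gateway side of the VPN-over-Direct-Connect setup; the customer gateway and VPN connection configuration are omitted, and the VPC ID is a placeholder.

import boto3

ec2 = boto3.client("ec2")

vgw = ec2.create_vpn_gateway(Type="ipsec.1")
vgw_id = vgw["VpnGateway"]["VpnGatewayId"]

# Attach the virtual private gateway to the secure VPC; an IPsec VPN running over the
# Direct Connect link then terminates on this gateway
ec2.attach_vpn_gateway(VpcId="vpc-0aaa1111bbb2222cc", VpnGatewayId=vgw_id)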