More Test Questions - 3 Flashcards
A security officer requires that access to company financial reports is logged. The reports are stored in an Amazon S3 bucket. Additionally, any modifications to the log files must be detected. Which actions should a solutions architect take?
1: Use S3 server access logging on the bucket that houses the reports with the read and write data events and the log file validation options enabled
2: Use S3 server access logging on the bucket that houses the reports with the read and write management events and log file validation options enabled
3: Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation
4: Use AWS CloudTrail to create a new trail. Configure the trail to log read and write management events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation
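For reference, the CloudTrail approach in option 3 maps to a short boto3 sketch like the one below (the trail name and both bucket names are placeholders, not from the question). Data events capture object-level reads and writes, and log file validation signs the delivered digest files so tampering can be detected.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Deliver trail logs to a separate bucket and sign digest files so that any
# modification of the log files can be detected
cloudtrail.create_trail(
    Name="financial-reports-trail",
    S3BucketName="example-trail-logs",
    EnableLogFileValidation=True,
)

# Record object-level (data event) reads and writes for the reports bucket only
cloudtrail.put_event_selectors(
    TrailName="financial-reports-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": False,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::example-reports-bucket/"],
        }],
    }],
)

cloudtrail.start_logging(Name="financial-reports-trail")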
A company operates a production web application that uses an Amazon RDS MySQL database. The database has automated, non-encrypted daily backups. To increase the security of the data, it has been recommended that encryption should be enabled for backups. Unencrypted backups will be destroyed after the first encrypted backup has been completed. What should be done to enable encryption for future backups?
1: Enable default encryption for the Amazon S3 bucket where backups are stored
2: Modify the backup section of the database configuration to toggle the Enable encryption check box
3: Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot
4: Enable an encrypted read replica on RDS for MySQL. Promote the encrypted read replica to primary. Remove the original database instance
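As a rough boto3 sketch of the snapshot-copy approach in option 3 (instance and snapshot identifiers and the KMS key are placeholders; waiters between the steps are omitted):

import boto3

rds = boto3.client("rds")

# 1. Snapshot the existing unencrypted instance
rds.create_db_snapshot(
    DBSnapshotIdentifier="prod-mysql-unencrypted-snap",
    DBInstanceIdentifier="prod-mysql",
)

# 2. Copy the snapshot with encryption by specifying a KMS key
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="prod-mysql-unencrypted-snap",
    TargetDBSnapshotIdentifier="prod-mysql-encrypted-snap",
    KmsKeyId="alias/aws/rds",
)

# 3. Restore a new instance from the encrypted snapshot; its automated
#    backups are then encrypted as well
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="prod-mysql-encrypted",
    DBSnapshotIdentifier="prod-mysql-encrypted-snap",
)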
A company has deployed an API in a VPC behind an internal Application Load Balancer (ALB). An application that consumes the API as a client is deployed in a second account in private subnets. Which architectural configurations will allow the API to be consumed without using the public Internet? (Select TWO)
1: Configure a VPC peering connection between the two VPCs. Access the API using the private address
2: Configure an AWS Direct Connect connection between the two VPCs. Access the API using the private address
3: Configure a ClassicLink connection for the API into the client VPC. Access the API using the ClassicLink address
4: Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address
5: Configure an AWS Resource Access Manager connection between the two accounts. Access the API using the private address
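To illustrate one of the private connectivity options, here is a minimal boto3 sketch of cross-account VPC peering (all VPC, route table, and account IDs plus the CIDR are placeholders; the accept call runs in the client account):

import boto3

ec2 = boto3.client("ec2")

# Request peering from the API VPC to the client VPC in the second account
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaaa1111",
    PeerVpcId="vpc-0bbbb2222",
    PeerOwnerId="222222222222",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Run in the client account: accept the request, then add routes on both sides
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)
ec2.create_route(
    RouteTableId="rtb-0cccc3333",
    DestinationCidrBlock="10.0.0.0/16",      # CIDR of the API VPC
    VpcPeeringConnectionId=pcx_id,
)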
An application runs on Amazon EC2 Linux instances. The application generates log files which are written using standard API calls. A storage solution is required that can be used to store the files indefinitely and must allow concurrent access to all files. Which storage service meets these requirements and is the MOST cost-effective?
1: Amazon EBS
2: Amazon EFS
3: Amazon EC2 instance store
4: Amazon S3
A production application runs on an Amazon RDS MySQL DB instance. A solutions architect is building a new reporting tool that will access the same data. The reporting tool must be highly available and not impact the performance of the production application. How can this be achieved?
1: Create a cross-region Multi-AZ deployment and create a read replica in the second region
2: Create a Multi-AZ RDS Read Replica of the production RDS DB instance
3: Use Amazon Data Lifecycle Manager to automatically create and manage snapshots
4: Create a Single-AZ RDS Read Replica of the production RDS DB instance. Create a second Single-AZ RDS Read Replica from the replica
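A Multi-AZ read replica can be created with a single boto3 call, roughly as below (instance identifiers are placeholders):

import boto3

rds = boto3.client("rds")

# The reporting tool reads from this replica, so production writes are not
# impacted; MultiAZ=True gives the replica its own standby for high availability
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="prod-mysql-reporting-replica",
    SourceDBInstanceIdentifier="prod-mysql",
    MultiAZ=True,
)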
An online store uses an Amazon Aurora database. The database is deployed as a Multi-AZ deployment. Recently, metrics have shown that database read requests are high and causing performance issues which result in latency for write requests. What should the solutions architect do to separate the read requests from the write requests?
1: Enable read through caching on the Amazon Aurora database
2: Update the application to read from the Multi-AZ standby instance
3: Create a read replica and modify the application to use the appropriate endpoint
4: Create a second Amazon Aurora database and link it to the primary database as a read replica
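Adding an Aurora Replica to the existing cluster and pointing reads at the cluster reader endpoint might look roughly like this (identifiers, the instance class, and the endpoint shown in the comment are placeholders):

import boto3

rds = boto3.client("rds")

# Add a reader instance (Aurora Replica) to the existing cluster
rds.create_db_instance(
    DBInstanceIdentifier="store-aurora-reader-1",
    DBClusterIdentifier="store-aurora-cluster",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
)

# The application then sends SELECT traffic to the cluster reader endpoint,
# e.g. store-aurora-cluster.cluster-ro-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com,
# while writes continue to use the writer (cluster) endpoint.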
An application is deployed on multiple AWS regions and accessed from around the world. The application exposes static public IP addresses. Some users are experiencing poor performance when accessing the application over the Internet. What should a solutions architect recommend to reduce internet latency?
1: Set up AWS Global Accelerator and add endpoints
2: Set up AWS Direct Connect locations in multiple Regions
3: Set up an Amazon CloudFront distribution to access an application
4: Set up an Amazon Route 53 geoproximity routing policy to route traffic
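As a sketch, a Global Accelerator setup in boto3 looks roughly like this (the accelerator name, port, Region, and endpoint ID are placeholders; the Global Accelerator API is served from us-west-2):

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Static anycast IPs are announced from AWS edge locations, so user traffic
# enters the AWS network close to the user
acc = ga.create_accelerator(Name="app-accelerator", IpAddressType="IPV4", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region the application is deployed in
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[{"EndpointId": "eipalloc-0abc1234def567890", "Weight": 128}],
)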
A new application will be launched on an Amazon EC2 instance with an Elastic Block Store (EBS) volume. A solutions architect needs to determine the most cost-effective storage option. The application will have infrequent usage, with peaks of traffic for a couple of hours in the morning and evening. Disk I/O is variable with peaks of up to 3,000 IOPS. Which solution should the solutions architect recommend?
1: Amazon EBS Cold HDD (sc1)
2: Amazon EBS General Purpose SSD (gp2)
3: Amazon EBS Provisioned IOPS SSD (io1)
4: Amazon EBS Throughput Optimized HDD (st1)
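For context on the IOPS numbers: gp2 provides a baseline of 3 IOPS per GiB, and smaller volumes can burst to 3,000 IOPS, so a volume like the placeholder below covers the stated peaks without paying for provisioned IOPS.

import boto3

ec2 = boto3.client("ec2")

# 1,000 GiB of gp2 has a 3,000 IOPS baseline (3 IOPS/GiB); size and AZ are examples
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="gp2",
    Size=1000,
)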
A security team wants to limit access to specific services or actions in all of the team’s AWS accounts. All accounts belong to a large organization in AWS Organizations. The solution must be scalable and there must be a single point where permissions can be maintained. What should a solutions architect do to accomplish this?
1: Create an ACL to provide access to the services or actions
2: Create a security group to allow accounts and attach it to user groups
3: Create cross-account roles in each account to deny access to the services or actions
4: Create a service control policy in the root organizational unit to deny access to the services or actions
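Service control policies are maintained in one place and apply to every account under the target root or OU. A minimal boto3 sketch (the denied actions, policy name, and IDs are illustrative placeholders):

import boto3
import json

org = boto3.client("organizations")

# Example SCP that denies specific services/actions organization-wide
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["dynamodb:*", "rds:DeleteDBInstance"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="restrict-services",
    Description="Deny specific services or actions in all member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach at the root (or an OU) so it flows down to every account beneath it
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",
)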
A company is planning to use Amazon S3 to store documents uploaded by its customers. The documents must be encrypted at rest in Amazon S3. The company does not want to spend time managing and rotating the keys, but it does want to control who can access those keys. What should a solutions architect use to accomplish this?
1: Server-Side Encryption with keys stored in an S3 bucket
2: Server-Side Encryption with Customer-Provided Keys (SSE-C)
3: Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
4: Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
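With SSE-KMS, AWS KMS handles the key material and rotation, while the key policy controls who can use the key. A minimal sketch of setting it as the bucket default (the bucket name and key ARN are placeholders):

import boto3

s3 = boto3.client("s3")

# Every new object is encrypted with the specified customer managed KMS key
s3.put_bucket_encryption(
    Bucket="example-customer-documents",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/11111111-2222-3333-4444-555555555555",
            }
        }]
    },
)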
A company has some statistical data stored in an Amazon RDS database. The company wants to allow users to access this information using an API. A solutions architect must create a solution that allows sporadic access to the data, ranging from no requests to large bursts of traffic. Which solution should the solutions architect suggest?
1: Set up an Amazon API Gateway and use Amazon ECS
2: Set up an Amazon API Gateway and use AWS Elastic Beanstalk
3: Set up an Amazon API Gateway and use AWS Lambda functions
4: Set up an Amazon API Gateway and use Amazon EC2 with Auto Scaling
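For the serverless option, an HTTP API can be "quick created" in front of an existing Lambda function; both services are billed per request, so sporadic traffic carries no idle cost. A sketch (the API name and function ARN are placeholders):

import boto3

apigw = boto3.client("apigatewayv2")

# Quick-create an HTTP API whose default route proxies to the Lambda function
# (the function also needs a resource-based permission allowing API Gateway to invoke it)
apigw.create_api(
    Name="stats-api",
    ProtocolType="HTTP",
    Target="arn:aws:lambda:us-east-1:111122223333:function:get-stats",
)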
A company runs a financial application using an Amazon EC2 Auto Scaling group behind an Application Load Balancer (ALB). When running month-end reports on a specific day and time each month the application becomes unacceptably slow. Amazon CloudWatch metrics show the CPU utilization hitting 100%. What should a solutions architect recommend to ensure the application is able to handle the workload and avoid downtime?
1: Configure an Amazon CloudFront distribution in front of the ALB
2: Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization
3: Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule
4: Configure Amazon ElastiCache to remove some of the workload from the EC2 instances
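When the load spike happens at a known day and time, a scheduled action can add capacity in advance. A sketch of one scale-out action (the group name, cron expression, and sizes are placeholders; a second action would scale back in afterwards):

import boto3

autoscaling = boto3.client("autoscaling")

# Cron is evaluated in UTC; this example scales out on the 1st of each month
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="finance-app-asg",
    ScheduledActionName="month-end-scale-out",
    Recurrence="0 6 1 * *",
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)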
A solutions architect is designing a high performance computing (HPC) application using Amazon EC2 Linux instances. All EC2 instances need to communicate to each other with low latency and high throughput network performance. Which EC2 solution BEST meets these requirements?
1: Launch the EC2 instances in a cluster placement group in one Availability Zone
2: Launch the EC2 instances in a spread placement group in one Availability Zone
3: Launch the EC2 instances in an Auto Scaling group in two Regions. Place a Network Load Balancer in front of the instances
4: Launch the EC2 instances in an Auto Scaling group spanning multiple Availability Zones
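A cluster placement group packs instances close together in a single Availability Zone for low-latency, high-throughput node-to-node traffic. A rough sketch (the AMI, instance type, and count are placeholders):

import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch the HPC nodes into the placement group
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "hpc-cluster"},
)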
A web application in a three-tier architecture runs on a fleet of Amazon EC2 instances. Performance issues have been reported and investigations point to insufficient swap space. The operations team requires monitoring to determine if this is correct. What should a solutions architect recommend?
1: Configure an Amazon CloudWatch SwapUsage metric dimension. Monitor the SwapUsage dimension in the EC2 metrics in CloudWatch
2: Use EC2 metadata to collect information, then publish it to Amazon CloudWatch custom metrics. Monitor SwapUsage metrics in CloudWatch
3: Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor SwapUtilization metrics in CloudWatch
4: Enable detailed monitoring in the EC2 console. Create an Amazon CloudWatch SwapUtilization custom metric. Monitor SwapUtilization metrics in CloudWatch
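Memory and swap are not visible to the hypervisor, so they require the CloudWatch agent (or a script publishing custom metrics) on each instance. Below is a minimal sketch of an agent configuration collecting swap metrics, written from Python; the file path and measurement names follow the agent's documented defaults, but treat them as assumptions here.

import json

# Swap metrics are published under the CWAgent namespace once the agent is
# started with this configuration
agent_config = {
    "metrics": {
        "metrics_collected": {
            "swap": {"measurement": ["swap_used_percent", "swap_used", "swap_free"]}
        }
    }
}

with open("/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json", "w") as f:
    json.dump(agent_config, f, indent=2)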
A gaming company collects real-time data and stores it in an on-premises database system. The company is migrating to AWS and needs better performance for the database. A solutions architect has been asked to recommend an in-memory database that supports data replication. Which database should a solutions architect recommend?
1: Amazon RDS for MySQL
2: Amazon RDS for PostgreSQL
3: Amazon ElastiCache for Redis
4: Amazon ElastiCache for Memcached
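ElastiCache for Redis is the in-memory option that supports replication (Memcached does not). A sketch of a replication group with one primary and two replicas (the identifiers and node type are placeholders):

import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="gaming-redis",
    ReplicationGroupDescription="In-memory store with replication",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumCacheClusters=3,              # 1 primary + 2 read replicas
    AutomaticFailoverEnabled=True,
)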
A company has experienced malicious traffic from some suspicious IP addresses. The security team discovered the requests are from different IP addresses under the same CIDR range. What should a solutions architect recommend to the team?
1: Add a rule in the inbound table of the security group to deny the traffic from that CIDR range
2: Add a rule in the outbound table of the security group to deny the traffic from that CIDR range
3: Add a deny rule in the inbound table of the network ACL with a lower rule number than other rules
4: Add a deny rule in the outbound table of the network ACL with a lower rule number than other rules
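Network ACL rules are evaluated in ascending rule-number order and, unlike security groups, support explicit deny. A sketch of such an inbound deny entry (the ACL ID, rule number, and CIDR are placeholders):

import boto3

ec2 = boto3.client("ec2")

# The deny must carry a lower rule number than the existing allow rules so it
# is evaluated first
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=50,
    Protocol="-1",                   # all protocols
    RuleAction="deny",
    Egress=False,                    # inbound rule
    CidrBlock="203.0.113.0/24",
)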
A solutions architect is designing a microservices architecture. AWS Lambda will store data in an Amazon DynamoDB table named Orders.
The solutions architect needs to apply an IAM policy to the Lambda function’s execution role to allow it to put, update, and delete items in the Orders table. No other actions should be allowed. Which of the following code snippets should be included in the IAM policy to fulfill this requirement whilst providing the LEAST privileged access?
1: "Sid": "PutUpdateDeleteOnOrders", "Effect": "Allow", "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:DeleteItem"], "Resource": "arn:aws:dynamodb:us-east-1:227392126428:table/Orders"
2: "Sid": "PutUpdateDeleteOnOrders", "Effect": "Allow", "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:DeleteItem"], "Resource": "arn:aws:dynamodb:us-east-1:227392126428:table/*"
3: "Sid": "PutUpdateDeleteOnOrders", "Effect": "Allow", "Action": "dynamodb:*", "Resource": "arn:aws:dynamodb:us-east-1:227392126428:table/Orders"
4: "Sid": "PutUpdateDeleteOnOrders", "Effect": "Deny", "Action": "dynamodb:*", "Resource": "arn:aws:dynamodb:us-east-1:227392126428:table/Orders"
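For readability, here is the first snippet expanded into a complete policy document as it would be attached to the execution role (only the surrounding Version/Statement wrapper is added; the statement itself is taken from the option above):

import json

orders_item_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PutUpdateDeleteOnOrders",
        "Effect": "Allow",
        "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:DeleteItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:227392126428:table/Orders",
    }],
}

print(json.dumps(orders_item_policy, indent=2))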
A company has created a duplicate of its environment in another AWS Region. The application is running in warm standby mode. There is an Application Load Balancer (ALB) in front of the application. Currently, failover is manual and requires updating a DNS alias record to point to the secondary ALB. How can a solutions architect automate the failover process?
1: Enable an ALB health check
2: Enable an Amazon Route 53 health check
3: Create a CNAME record on Amazon Route 53 pointing to the ALB endpoint
4: Create a latency based routing policy on Amazon Route 53
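Automated failover combines a Route 53 health check with failover alias records. A condensed sketch (the zone ID, record name, ALB DNS names, and the ALB hosted zone ID are placeholders; a matching SECONDARY record would point at the standby ALB):

import boto3

route53 = boto3.client("route53")

# Health check against the primary ALB
hc = route53.create_health_check(
    CallerReference="primary-alb-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary-alb-1234567890.us-east-1.elb.amazonaws.com",
        "Port": 443,
        "ResourcePath": "/health",
    },
)

# PRIMARY failover alias record tied to the health check
route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLEZONE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "HealthCheckId": hc["HealthCheck"]["Id"],
            "AliasTarget": {
                "HostedZoneId": "Z0EXAMPLEALBZONE",
                "DNSName": "primary-alb-1234567890.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)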
An application allows users to upload and download files. Files older than 2 years will be accessed less frequently. A solutions architect needs to ensure that the application can scale to any number of files while maintaining high availability and durability. Which scalable solution should the solutions architect recommend?
1: Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard Infrequent Access (S3 Standard-IA)
2: Store the files on Amazon Elastic File System (EFS) with a lifecycle policy that moves objects older than 2 years to EFS Infrequent Access (EFS IA)
3: Store the files in Amazon Elastic Block Store (EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data older than 2 years
4: Store the files in Amazon Elastic Block Store (EBS) volumes. Create a lifecycle policy to move files older than 2 years to Amazon S3 Glacier
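A lifecycle rule expressing the 2-year transition might look like this (the bucket name is a placeholder; 730 days approximates 2 years):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-user-files",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "move-old-files-to-standard-ia",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},          # apply to the whole bucket
            "Transitions": [{"Days": 730, "StorageClass": "STANDARD_IA"}],
        }]
    },
)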
A company is planning to migrate a large quantity of important data to Amazon S3. The data will be uploaded to a versioning enabled bucket in the us-west-1 Region. The solution needs to include replication of the data to another Region for disaster recovery purposes. How should a solutions architect configure the replication?
1: Create an additional S3 bucket in another Region and configure cross-Region replication
2: Create an additional S3 bucket in another Region and configure cross-origin resource sharing (CORS)
3: Create an additional S3 bucket with versioning in another Region and configure cross-Region replication
4: Create an additional S3 bucket with versioning in another Region and configure cross-origin resource sharing (CORS)
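Replication requires versioning on both the source and destination buckets before the replication rule is added. A sketch (the bucket names, Region, and IAM role ARN are placeholders):

import boto3

s3 = boto3.client("s3")

# Versioning is already enabled on the source; enable it on the destination too
s3.put_bucket_versioning(
    Bucket="example-dr-bucket-us-east-2",
    VersioningConfiguration={"Status": "Enabled"},
)

# Replication rule on the source bucket in us-west-1
s3.put_bucket_replication(
    Bucket="example-source-bucket-us-west-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [{
            "ID": "dr-replication",
            "Status": "Enabled",
            "Prefix": "",
            "Destination": {"Bucket": "arn:aws:s3:::example-dr-bucket-us-east-2"},
        }],
    },
)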
An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run in an Amazon EC2 Auto Scaling group behind an Application Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 40%. What should a solutions architect do to maintain the desired performance across all instances in the group?
1: Use a simple scaling policy to dynamically scale the Auto Scaling group
2: Use a target tracking policy to dynamically scale the Auto Scaling group
3: Use an AWS Lambda function to update the desired Auto Scaling group capacity
4: Use scheduled scaling actions to scale up and scale down the Auto Scaling group
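A target tracking policy lets Auto Scaling hold average CPU near the 40% target without defining separate scale-out and scale-in alarms. A sketch (the group and policy names are placeholders):

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",
    PolicyName="keep-cpu-near-40",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 40.0,
    },
)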
A High Performance Computing (HPC) application needs storage that can provide 135,000 IOPS. The storage layer is replicated across all instances in a cluster. What is the optimal storage solution that provides the required performance and is cost-effective?
1: Use Amazon EBS Provisioned IOPS volume with 135,000 IOPS
2: Use Amazon Instance Store
3: Use Amazon S3 with byte-range fetch
4: Use Amazon EC2 Enhanced Networking with an EBS HDD Throughput Optimized volume
A high-performance file system is required for a financial modelling application. The data set will be stored on Amazon S3 and the storage solution must have seamless integration so objects can be accessed as files. Which storage solution should be used?
1: Amazon FSx for Windows File Server
2: Amazon FSx for Lustre
3: Amazon Elastic File System (EFS)
4: Amazon Elastic Block Store (EBS)
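FSx for Lustre can be linked to an S3 bucket so objects are presented as files and results can be written back. A sketch of a scratch file system (the capacity, subnet, and bucket paths are placeholders):

import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                        # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "ImportPath": "s3://example-modelling-data",
        "ExportPath": "s3://example-modelling-data/results",
    },
)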
An application requires a MySQL database which will only be used several times a week for short periods. The database needs to provide automatic instantiation and scaling. Which database service is most suitable?
1: Amazon RDS MySQL
2: Amazon EC2 instance with MySQL database installed
3: Amazon Aurora
4: Amazon Aurora Serverless
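Aurora Serverless provisions capacity on demand and can pause when idle, which suits a database used only a few times a week. A rough sketch (the identifier, credentials, and capacity range are placeholders; in practice credentials would come from Secrets Manager):

import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="occasional-mysql",
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="example-Password-123",   # placeholder only
    ScalingConfiguration={
        "MinCapacity": 1,
        "MaxCapacity": 8,
        "AutoPause": True,
        "SecondsUntilAutoPause": 300,
    },
)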
An architect needs to find a way to automatically and repeatably create many member accounts within an AWS Organization. The accounts also need to be moved into an OU and have VPCs and subnets created. What is the best way to achieve this?
1: Use the AWS Organizations API
2: Use CloudFormation with scripts
3: Use the AWS Management Console
4: Use the AWS CLI
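Whichever tooling drives it, the account-creation part of the automation calls the Organizations API, and CloudFormation (for example, deployed as StackSets) can then create the VPCs and subnets inside each new account. A boto3 sketch of the account step (the email, names, and IDs are placeholders; polling for completion is abbreviated):

import boto3

org = boto3.client("organizations")

resp = org.create_account(
    Email="team-dev@example.com",
    AccountName="team-dev",
)
request_id = resp["CreateAccountStatus"]["Id"]

# Poll describe_create_account_status(CreateAccountRequestId=request_id) until the
# state is SUCCEEDED, then move the new account into the target OU
org.move_account(
    AccountId="111122223333",
    SourceParentId="r-examplerootid",
    DestinationParentId="ou-root-exampleou1",
)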
An organization is extending a secure development environment into AWS. They have already secured the VPC, including removing the Internet Gateway and setting up a Direct Connect connection. What else needs to be done to add encryption?
1: Set up a Virtual Private Gateway (VPG)
2: Enable IPSec encryption on the Direct Connect connection
3: Set up the Border Gateway Protocol (BGP) with encryption
4: Configure an AWS Direct Connect Gateway
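Running an IPSec VPN over the Direct Connect connection terminates on a virtual private gateway attached to the VPC. A condensed sketch (the VPC ID, on-premises device IP, and ASN are placeholders):

import boto3

ec2 = boto3.client("ec2")

vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    VpcId="vpc-0123456789abcdef0",
)

cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",            # on-premises VPN device
    BgpAsn=65000,
)

# The IPSec tunnel provides the encryption; carrying it over Direct Connect
# keeps the traffic off the public Internet
ec2.create_vpn_connection(
    Type="ipsec.1",
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
)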