More Test Questions - 4 Flashcards
A company is deploying an Amazon ElastiCache for Redis cluster. To enhance security, a password should be required to access the database. What should the Solutions Architect use?
1: AWS Directory Service
2: AWS IAM Policy
3: Redis AUTH command
4: VPC Security Group
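To illustrate the Redis AUTH approach, a minimal boto3 sketch is shown below; the replication group name, node type, and token value are placeholders, and AUTH requires in-transit encryption to be enabled.

    import boto3

    elasticache = boto3.client("elasticache")

    # Redis AUTH only works when in-transit encryption is enabled on the replication group.
    elasticache.create_replication_group(
        ReplicationGroupId="secure-redis",                 # placeholder identifier
        ReplicationGroupDescription="Redis cluster protected by an AUTH token",
        Engine="redis",
        CacheNodeType="cache.t3.small",
        NumCacheClusters=2,
        TransitEncryptionEnabled=True,
        AuthToken="a-long-random-password-123456",         # password clients must supply via AUTH
    )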
To increase performance and redundancy for an application, a company has decided to run multiple implementations in different AWS Regions behind Network Load Balancers. The company currently advertises the application using two public IP addresses from separate /24 address ranges and would prefer not to change these. Users should be directed to the closest available application endpoint. Which actions should a Solutions Architect take? (Select TWO)
1: Create an Amazon Route 53 geolocation based routing policy
2: Create an AWS Global Accelerator and attach endpoints in each AWS Region
3: Assign new static anycast IP addresses and modify any existing pointers
4: Migrate both public IP addresses to the AWS Global Accelerator
5: Create PTR records to map existing public IP addresses to an Alias
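As a hedged sketch of the Global Accelerator approach: assuming the two /24 ranges have already been provisioned into Global Accelerator through BYOIP, the existing addresses can be attached to a new accelerator and an endpoint group created per Region. All names, addresses, and ARNs below are placeholders.

    import boto3

    # The Global Accelerator API endpoint lives in us-west-2 regardless of where endpoints run.
    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    accelerator = ga.create_accelerator(
        Name="retail-app",
        IpAddressType="IPV4",
        IpAddresses=["203.0.113.10", "198.51.100.10"],   # existing BYOIP addresses (placeholders)
        Enabled=True,
    )

    listener = ga.create_listener(
        AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
        Protocol="TCP",
        PortRanges=[{"FromPort": 443, "ToPort": 443}],
    )

    # One endpoint group per Region, each pointing at that Region's Network Load Balancer.
    nlb_arn = "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/app-nlb/50dc6c495c0c9188"
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion="eu-west-1",
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )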
A company uses three Amazon VPCs in the same AWS Region. The company has two AWS Direct Connect connections to two separate company offices and wishes to share these with all three VPCs. A Solutions Architect has created an AWS Direct Connect gateway. How can the required connectivity be configured?
1: Associate the Direct Connect gateway to a transit gateway
2: Associate the Direct Connect gateway to a virtual private gateway in each VPC
3: Create a VPC peering connection between the VPCs and route entries for the Direct Connect Gateway
4: Create a transit virtual interface between the Direct Connect gateway and each VPC
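Since the question states the Direct Connect gateway already exists, a minimal sketch of associating it with a virtual private gateway in each VPC might look like this; all IDs are placeholders, and each virtual private gateway is assumed to already be attached to its VPC.

    import boto3

    dx = boto3.client("directconnect")

    # Placeholder IDs: the existing Direct Connect gateway and one VGW per VPC.
    dxgw_id = "11aa22bb-3344-55cc-66dd-77ee88ff99aa"
    vgw_ids = ["vgw-0a1b2c3d4e5f6a7b8", "vgw-0b2c3d4e5f6a7b8c9", "vgw-0c3d4e5f6a7b8c9d0"]

    for vgw_id in vgw_ids:
        dx.create_direct_connect_gateway_association(
            directConnectGatewayId=dxgw_id,
            virtualGatewayId=vgw_id,
        )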
A retail organization sends coupons out twice a week and this results in a predictable surge in sales traffic. The application runs on Amazon EC2 instances behind an Elastic Load Balancer. The organization is looking for ways to reduce cost without impacting performance or reliability. How can they achieve this goal?
1: Purchase scheduled reserved instances
2: Use a mixture of spot instances and on demand instances
3: Increase the instance size of the existing EC2 instances
4: Purchase Amazon EC2 dedicated hosts
Over 500 TB of data must be analyzed using standard SQL business intelligence tools. The dataset consists of a combination of structured data and unstructured data. The unstructured data is small and stored on Amazon S3. Which AWS services are most suitable for performing analytics on the data?
1: Amazon RDS MariaDB with Amazon Athena
2: Amazon DynamoDB with Amazon DynamoDB Accelerator (DAX)
3: Amazon ElastiCache for Redis with cluster mode enabled
4: Amazon Redshift with Amazon Redshift Spectrum
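A brief sketch of the Redshift Spectrum pattern using the Redshift Data API: an external schema is defined over the S3 data and can then be joined with local tables using standard SQL. The cluster, database, IAM role, and table names are assumptions for illustration.

    import boto3

    rsd = boto3.client("redshift-data")

    # The external schema maps to a Glue Data Catalog database that describes the S3 data.
    create_schema_sql = """
    CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum_schema
    FROM DATA CATALOG DATABASE 'clickstream_db'
    IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftSpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;
    """

    rsd.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="analytics",
        DbUser="awsuser",
        Sql=create_schema_sql,
    )

    # Standard SQL can then query (and join) the external S3-backed tables.
    rsd.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="analytics",
        DbUser="awsuser",
        Sql="SELECT page, COUNT(*) FROM spectrum_schema.clicks GROUP BY page;",
    )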
An application is being monitored using Amazon GuardDuty. A Solutions Architect needs to be notified by email of medium to high severity events. How can this be achieved?
1: Configure an Amazon CloudWatch alarm that triggers based on a GuardDuty metric
2: Create an Amazon CloudWatch Events rule that triggers an Amazon SNS topic
3: Create an Amazon CloudWatch Logs rule that triggers an AWS Lambda function
4: Configure an Amazon CloudTrail alarm that triggers based on GuardDuty API activity
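A hedged sketch of the CloudWatch Events (EventBridge) rule approach: the rule matches GuardDuty findings with severity 4.0 or higher and targets an SNS topic with an email subscription. Names and the email address are placeholders, and the topic's access policy must also allow events.amazonaws.com to publish (omitted here).

    import json
    import boto3

    events = boto3.client("events")
    sns = boto3.client("sns")

    topic_arn = sns.create_topic(Name="guardduty-alerts")["TopicArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="secops@example.com")

    # GuardDuty severity 4.0-6.9 is medium and 7.0-8.9 is high, so match >= 4.
    pattern = {
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"severity": [{"numeric": [">=", 4]}]},
    }

    events.put_rule(Name="guardduty-medium-high", EventPattern=json.dumps(pattern))
    events.put_targets(Rule="guardduty-medium-high", Targets=[{"Id": "sns", "Arn": topic_arn}])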
A company is migrating a decoupled application to AWS. The application uses a message broker based on the MQTT protocol. The application will be migrated to Amazon EC2 instances and the solution for the message broker must not require rewriting application code. Which AWS service can be used for the migrated message broker?
1: Amazon SQS
2: Amazon SNS
3: Amazon MQ
4: AWS Step Functions
An HR application stores employment records on Amazon S3. Regulations mandate that the records be retained for seven years. Once created, the records are accessed infrequently for the first three months and must then be available within 10 minutes if required. Which lifecycle action meets the requirements whilst MINIMIZING cost?
1: Store the data in S3 Standard for 3 months, then transition to S3 Glacier
2: Store the data in S3 Standard-IA for 3 months, then transition to S3 Glacier
3: Store the data in S3 Standard for 3 months, then transition to S3 Standard-IA
4: Store the data in S3 Intelligent Tiering for 3 months, then transition to S3 Standard-IA
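To sketch the lifecycle option that keeps objects in S3 Standard initially: a rule transitions objects to S3 Glacier after 90 days and expires them after roughly seven years; Glacier expedited retrievals typically complete within 1-5 minutes, which fits the 10-minute requirement. The bucket name is a placeholder.

    import boto3

    s3 = boto3.client("s3")

    # Objects stay in S3 Standard (their upload storage class) for the first 90 days.
    s3.put_bucket_lifecycle_configuration(
        Bucket="hr-employment-records",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "retain-7-years",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 2555},   # ~7 years
                }
            ]
        },
    )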
A highly elastic application consists of three tiers. The application tier runs in an Auto Scaling group and processes data and writes it to an Amazon RDS MySQL database. The Solutions Architect wants to restrict access to the database tier to only accept traffic from the instances in the application tier. However, instances in the application tier are being constantly launched and terminated. How can the Solutions Architect configure secure access to the database tier?
1: Configure the database security group to allow traffic only from the application security group
2: Configure the database security group to allow traffic only from port 3306
3: Configure a Network ACL on the database subnet to deny all traffic to ports other than 3306
4: Configure a Network ACL on the database subnet to allow all traffic from the application subnet
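A minimal sketch of security-group referencing: the database tier's group allows MySQL traffic only from members of the application tier's group, so instances launched or terminated by the Auto Scaling group are covered automatically. Group IDs are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    db_sg_id = "sg-0a1b2c3d4e5f6a7b8"    # database tier security group (placeholder)
    app_sg_id = "sg-0f9e8d7c6b5a4f3e2"   # application tier security group (placeholder)

    ec2.authorize_security_group_ingress(
        GroupId=db_sg_id,
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 3306,
                "ToPort": 3306,
                # Referencing the application tier's group permits any instance that carries
                # that group, regardless of its (constantly changing) IP address.
                "UserIdGroupPairs": [{"GroupId": app_sg_id}],
            }
        ],
    )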
A Solutions Architect is rearchitecting an application to decouple its components. The application will send batches of up to 1,000 messages per second that must be received in the correct order by the consumers. Which action should the Solutions Architect take?
1: Create an Amazon SQS Standard queue
2: Create an Amazon SNS topic
3: Create an Amazon SQS FIFO queue
4: Create an AWS Step Functions state machine
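An illustrative FIFO queue sketch: ordering is preserved per message group, and 1,000 messages per second is reachable with batching (up to 10 messages per SendMessageBatch call) and/or the high-throughput FIFO attributes shown below. Queue and group names are placeholders.

    import boto3

    sqs = boto3.client("sqs")

    # FIFO queue names must end in ".fifo".
    queue = sqs.create_queue(
        QueueName="orders.fifo",
        Attributes={
            "FifoQueue": "true",
            "ContentBasedDeduplication": "true",
            "FifoThroughputLimit": "perMessageGroupId",   # high-throughput FIFO settings
            "DeduplicationScope": "messageGroup",
        },
    )

    sqs.send_message(
        QueueUrl=queue["QueueUrl"],
        MessageBody="order-123",
        MessageGroupId="customer-42",   # ordering is guaranteed within a message group
    )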
A Solutions Architect is designing an application that consists of AWS Lambda and Amazon RDS Aurora MySQL. The Lambda function must use database credentials to authenticate to MySQL and security policy mandates that these credentials must not be stored in the function code. How can the Solutions Architect securely store the database credentials and make them available to the function?
1: Store the credentials in AWS Key Management Service and use environment variables in the function code pointing to KMS
2: Store the credentials in Systems Manager Parameter Store and update the function code and execution role
3: Use the AWSAuthenticationPlugin and associate an IAM user account in the MySQL database
4: Create an IAM policy and store the credentials in the policy. Attach the policy to the Lambda function execution role
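A minimal sketch of the Parameter Store approach: the function retrieves a SecureString parameter at runtime instead of embedding the credential in code. The parameter name is an assumption, and the execution role would need ssm:GetParameter plus kms:Decrypt for the key protecting the parameter.

    import boto3

    ssm = boto3.client("ssm")

    def lambda_handler(event, context):
        # SecureString parameters are encrypted with KMS; WithDecryption returns plaintext.
        param = ssm.get_parameter(Name="/prod/aurora/password", WithDecryption=True)
        db_password = param["Parameter"]["Value"]
        # ... connect to the Aurora MySQL endpoint using db_password ...
        return {"status": "ok"}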
A company is finalizing its disaster recovery plan. A limited set of core services will be replicated to the DR site, ready to seamlessly take over in the event of a disaster. All other services will be switched off. Which DR strategy is the company using?
1: Backup and restore
2: Pilot light
3: Warm standby
4: Multi-site
An application that runs a computational fluid dynamics workload uses a tightly coupled HPC architecture based on the MPI protocol and runs across many nodes. A service-managed deployment is required to minimize operational overhead. Which deployment option is MOST suitable for provisioning and managing the resources required for this use case?
1: Use Amazon EC2 Auto Scaling to deploy instances in multiple subnets
2: Use AWS CloudFormation to deploy a Cluster Placement Group on EC2
3: Use AWS Batch to deploy a multi-node parallel job
4: Use AWS Elastic Beanstalk to provision and manage the EC2 instances
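To illustrate the AWS Batch multi-node parallel option, a hedged sketch of a job definition follows. The image URI, vCPU/memory sizing, and node count are placeholders, and a suitable Batch compute environment and job queue are assumed to already exist.

    import boto3

    batch = boto3.client("batch")

    # A multi-node parallel (MNP) job definition; Batch launches the nodes together so an
    # MPI job can run across them. "0:" applies the same container settings to all nodes.
    batch.register_job_definition(
        jobDefinitionName="cfd-mpi-job",
        type="multinode",
        nodeProperties={
            "numNodes": 8,
            "mainNode": 0,
            "nodeRangeProperties": [
                {
                    "targetNodes": "0:",
                    "container": {
                        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/cfd-solver:latest",
                        "vcpus": 36,
                        "memory": 60000,
                        "command": ["mpirun", "--allow-run-as-root", "/opt/cfd/solver"],
                    },
                }
            ],
        },
    )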
A Solutions Architect is designing an application that will run on an Amazon EC2 instance. The application must asynchronously invoke an AWS Lambda function to analyze thousands of .CSV files. The services should be decoupled. Which service can be used to decouple the compute services?
1: Amazon SQS
2: Amazon SNS
3: Amazon Kinesis
4: Amazon OpsWorks
A large MongoDB database running on-premises must be migrated to Amazon DynamoDB within the next few weeks. The database is too large to migrate over the company’s limited internet bandwidth so an alternative solution must be used. What should a Solutions Architect recommend?
1: Setup an AWS Direct Connect and migrate the database to Amazon DynamoDB using the AWS Database Migration Service (DMS)
2: Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to Amazon DynamoDB
3: Enable compression on the MongoDB database and use the AWS Database Migration Service (DMS) to directly migrate the database to Amazon DynamoDB
4: Use the AWS Database Migration Service (DMS) to extract and load the data to an AWS Snowball Edge device. Complete the migration to Amazon DynamoDB using AWS DMS in the AWS Cloud
Every time an item in an Amazon DynamoDB table is modified, a record must be retained for compliance reasons. What is the most efficient solution for recording this information?
1: Enable Amazon CloudWatch Logs. Configure an AWS Lambda function to monitor the log files and record deleted item data to an Amazon S3 bucket
2: Enable DynamoDB Streams. Configure an AWS Lambda function to poll the stream and record the modified item data to an Amazon S3 bucket
3: Enable Amazon CloudTrail. Configure an Amazon EC2 instance to monitor activity in the CloudTrail log files and record changed items in another DynamoDB table
4: Enable DynamoDB Global Tables. Enable DynamoDB streams on the multi-region table and save the output directly to an Amazon S3 bucket
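A sketch of the DynamoDB Streams approach: with the stream enabled (for example with the NEW_AND_OLD_IMAGES view type) and an event source mapping to a Lambda function, each modification can be written to S3. The bucket name is a placeholder.

    import json
    import boto3

    s3 = boto3.client("s3")

    def lambda_handler(event, context):
        # Invoked via the stream's event source mapping; each record describes one item change.
        for record in event["Records"]:
            change = record["dynamodb"]   # Keys plus NewImage/OldImage, depending on view type
            key = f"changes/{record['eventID']}.json"
            s3.put_object(
                Bucket="ddb-audit-records",
                Key=key,
                Body=json.dumps(change, default=str),
            )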
An application in a private subnet needs to query data in an Amazon DynamoDB table. Use of the DynamoDB public endpoints must be avoided. What is the most EFFICIENT and secure method of enabling access to the table?
1: Create an interface VPC endpoint in the VPC with an Elastic Network Interface (ENI)
2: Create a gateway VPC endpoint and add an entry to the route table
3: Create a private Amazon DynamoDB endpoint and connect to it using an AWS VPN
4: Create a software VPN between DynamoDB and the application in the private subnet
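A minimal sketch of the gateway endpoint option; gateway endpoints for DynamoDB are free and simply add a prefix-list route to the selected route tables, so traffic never leaves the AWS network. The Region, VPC ID, and route table ID are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0a1b2c3d4e5f67890",
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        RouteTableIds=["rtb-0123456789abcdef0"],   # route table used by the private subnet
    )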
A Solutions Architect needs to select a low-cost, short-term option for adding resilience to an AWS Direct Connect connection. What is the MOST cost-effective solution to provide a backup for the Direct Connect connection?
1: Implement a second AWS Direct Connect connection
2: Implement an IPSec VPN connection and use the same BGP prefix
3: Configure AWS Transit Gateway with an IPSec VPN backup
4: Configure an IPSec VPN connection over the Direct Connect link
The disk configuration for an Amazon EC2 instance must be finalized. The instance will be running an application that requires heavy read/write IOPS. A single volume is required that is 500 GiB in size and needs to support 20,000 IOPS. What EBS volume type should be selected?
1: EBS General Purpose SSD
2: EBS Provisioned IOPS SSD
3: EBS General Purpose SSD in a RAID 1 configuration
4: EBS Throughput Optimized HDD
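For reference, a Provisioned IOPS (io1) volume at this size and IOPS level could be created as sketched below (the Availability Zone is a placeholder); a 500 GiB gp2 volume would only provide a baseline of about 1,500 IOPS.

    import boto3

    ec2 = boto3.client("ec2")

    ec2.create_volume(
        AvailabilityZone="us-east-1a",
        VolumeType="io1",    # Provisioned IOPS SSD
        Size=500,            # GiB
        Iops=20000,
    )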
A new application you are designing will store data in an Amazon Aurora MySQL DB. You are looking for a way to enable inter-region disaster recovery capabilities with fast replication and fast failover. Which of the following options is the BEST solution?
1: Use Amazon Aurora Global Database
2: Enable Multi-AZ for the Aurora DB
3: Create an EBS backup of the Aurora volumes and use cross-region replication to copy the snapshot
4: Create a cross-region Aurora Read Replica
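A hedged sketch of the Aurora Global Database approach: the existing cluster becomes the primary of a global cluster, and a secondary cluster is added in another Region for fast, storage-level replication and fast failover. Identifiers, Regions, and the cluster ARN are placeholders.

    import boto3

    rds_primary = boto3.client("rds", region_name="us-east-1")
    rds_secondary = boto3.client("rds", region_name="eu-west-1")

    # Promote the existing Aurora MySQL cluster into a global database.
    rds_primary.create_global_cluster(
        GlobalClusterIdentifier="app-global",
        SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:app-primary",
    )

    # Add the secondary-Region cluster; it replicates from the primary's storage layer.
    rds_secondary.create_db_cluster(
        DBClusterIdentifier="app-secondary",
        Engine="aurora-mysql",
        GlobalClusterIdentifier="app-global",
    )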
A Solutions Architect regularly launches EC2 instances manually from the console and wants to streamline the process to reduce administrative overhead. Which feature of EC2 enables storing of settings such as AMI ID, instance type, key pairs and Security Groups?
1: Placement Groups
2: Launch Templates
3: Run Command
4: Launch Configurations
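A small sketch of a launch template capturing the AMI, instance type, key pair, and security group (all values are placeholders), followed by a launch that simply references the template.

    import boto3

    ec2 = boto3.client("ec2")

    ec2.create_launch_template(
        LaunchTemplateName="web-server",
        LaunchTemplateData={
            "ImageId": "ami-0abcdef1234567890",
            "InstanceType": "t3.micro",
            "KeyName": "my-key-pair",
            "SecurityGroupIds": ["sg-0123456789abcdef0"],
        },
    )

    # Later launches only need the template name (or ID) and a version.
    ec2.run_instances(
        LaunchTemplate={"LaunchTemplateName": "web-server", "Version": "$Latest"},
        MinCount=1,
        MaxCount=1,
    )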
You recently noticed that your Network Load Balancer (NLB) in one of your VPCs is not distributing traffic evenly between EC2 instances in your AZs. There are an odd number of EC2 instances spread across two AZs. The NLB is configured with a TCP listener on port 80 and is using active health checks. What is the most likely problem?
1: There is no HTTP listener
2: Health checks are failing in one AZ due to latency
3: NLB can only load balance within a single AZ
4: Cross-zone load balancing is disabled
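Cross-zone load balancing is disabled by default on Network Load Balancers, so each NLB node only forwards to targets in its own AZ and an odd number of instances splits unevenly. If that is the cause, it can be enabled as a load balancer attribute, as sketched below with a placeholder ARN.

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/my-nlb/50dc6c495c0c9188",
        Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
    )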
A Solutions Architect is creating a design for a multi-tiered serverless application. Which two services form the application-facing services of the AWS serverless infrastructure? (Select TWO)
1: API Gateway
2: AWS Cognito
3: AWS Lambda
4: Amazon ECS
5: Elastic Load Balancer
A Solutions Architect attempted to restart a stopped EC2 instance and it immediately changed from a pending state to a terminated state. What are the most likely explanations? (Select TWO)
1: You’ve reached your EBS volume limit
2: An EBS snapshot is corrupt
3: AWS does not currently have enough available On-Demand capacity to service your request
4: You have reached the limit on the number of instances that you can launch in a region
5: The AMI is unsupported
One of the applications you manage on RDS uses a MySQL DB and has been suffering from performance issues. You would like to set up a reporting process that will perform queries on the database, but you're concerned that the extra load will further impact the performance of the DB and may lead to a poor customer experience. What would be the best course of action to take so you can implement the reporting process?
1: Configure Multi-AZ to set up a secondary database instance in another region
2: Deploy a Read Replica to set up a secondary read-only database instance
3: Deploy a Read Replica to set up a secondary read and write database instance
4: Configure Multi-AZ to set up a secondary database instance in another Availability Zone
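A minimal sketch of creating a read replica for the reporting workload; identifiers and the instance class are placeholders. The replica serves read-only queries, whereas a Multi-AZ standby cannot be queried at all.

    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="app-mysql-reporting",   # new read-only replica
        SourceDBInstanceIdentifier="app-mysql",       # existing primary instance
        DBInstanceClass="db.r5.large",
    )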
A Solutions Architect is building a new Amazon Elastic Container Service (ECS) cluster. The ECS instances are running the EC2 launch type and load balancing is required to distribute connections to the tasks. It is required that the mapping of ports is performed dynamically and connections are routed to different groups of servers based on the path in the URL. Which AWS service should the Solutions Architect choose to fulfil these requirements?
1: An Amazon ECS Service
2: Application Load Balancer
3: Network Load Balancer
4: Classic Load Balancer
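To illustrate the Application Load Balancer choice, a sketch of a path-based listener rule is shown below (ARNs are placeholders); dynamic port mapping comes from registering the ECS service against an ALB target group with a host port of 0, so each task receives an ephemeral port.

    import boto3

    elbv2 = boto3.client("elbv2")

    # Route /api/* requests to the target group backing the API service's ECS tasks.
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/50dc6c495c0c9188/f2f7dc8efc522ab2",
        Priority=10,
        Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
        Actions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/api-tg/6d0ecf831eec9f09",
        }],
    )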