More Test Questions - 1 Flashcards
An application is being created that will use Amazon EC2 instances to generate and store data. Another set of EC2 instances will then analyze and modify the data. Storage requirements will be significant and will continue to grow over time. The application architects require a storage solution. Which action would meet these needs?
- 1: Store the data in an Amazon EBS volume. Mount the EBS volume on the application instances
- 2: Store the data in an Amazon EFS filesystem. Mount the file system on the application instances
- 3: Store the data in Amazon S3 Glacier. Update the vault policy to allow access to the application instances
- 4: Store the data in AWS Storage Gateway. Set up AWS Direct Connect between the Gateway appliance and the EC2 instances
A company hosts a multiplayer game on AWS. The application uses Amazon EC2 instances in a single Availability Zone and users connect at Layer 4. A solutions architect has been tasked with making the architecture highly available and more cost-effective. How can the solutions architect best meet these requirements? (Select TWO)
- 1: Configure an Auto Scaling group to add or remove instances in the Availability Zone automatically
- 2: Increase the number of instances and use smaller EC2 instance types
- 3: Configure a Network Load Balancer in front of the EC2 instances
- 4: Configure an Application Load Balancer in front of the EC2 instances
- 5: Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically
A company delivers content to subscribers distributed globally from an application running on AWS. The application uses a fleet of Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). Due to an update in copyright restrictions, it is necessary to block access for specific countries. What is the EASIEST method to meet this requirement?
- 1: Modify the ALB security group to deny incoming traffic from blocked countries
- 2: Modify the security group for EC2 instances to deny incoming traffic from blocked countries
- 3: Use Amazon CloudFront to serve the application and deny access to blocked countries
- 4: Use a network ACL to block the IP address ranges associated with the specific countries
A company stores important data in an Amazon S3 bucket. A solutions architect needs to ensure that data can be recovered in case of accidental deletion. Which action will accomplish this?
- 1: Enable Amazon S3 versioning
- 2: Enable Amazon S3 Intelligent-Tiering
- 3: Enable an Amazon S3 lifecycle policy
- 4: Enable Amazon S3 cross-Region replication
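Versioning (option 1) protects against accidental deletion because a delete only writes a delete marker; prior versions remain recoverable. A sketch of the boto3 request parameters, using a placeholder bucket name:

```python
# Sketch: enable versioning so deleted objects remain recoverable.
# "example-bucket" is a placeholder, not a real bucket.
def versioning_request(bucket):
    """Parameters for s3.put_bucket_versioning."""
    return {
        "Bucket": bucket,
        "VersioningConfiguration": {"Status": "Enabled"},
    }

versioning_req = versioning_request("example-bucket")
# A real call would be: boto3.client("s3").put_bucket_versioning(**versioning_req)
```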
A company is migrating from an on-premises infrastructure to the AWS Cloud. One of the company’s applications stores files on a Windows file server farm that uses Distributed File System Replication (DFSR) to keep data in sync. A solutions architect needs to replace the file server farm. Which service should the solutions architect use?
- 1: Amazon EFS
- 2: Amazon FSx
- 3: Amazon S3
- 4: AWS Storage Gateway
A website runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB) which serves as an origin for an Amazon CloudFront distribution. AWS WAF is being used to protect against SQL injection attacks. A review of security logs revealed an external malicious IP that needs to be blocked from accessing the website. What should a solutions architect do to protect the application?
- 1: Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address
- 2: Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address
- 3: Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP address
- 4: Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP address
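With AWS WAF (option 2), the block is expressed as an IP set referenced by a blocking rule in the web ACL. A sketch of the WAFv2 IP set request body; the name and address are placeholders, and WAFv2 requires CIDR notation even for a single address:

```python
# Sketch of a WAFv2 IP set for a CloudFront-scoped web ACL.
# "blocked-ips" and the address are placeholders.
def ip_blocklist(addresses):
    """Request body for wafv2.create_ip_set; addresses must be CIDRs."""
    return {
        "Name": "blocked-ips",
        "Scope": "CLOUDFRONT",        # the web ACL fronts a CloudFront distribution
        "IPAddressVersion": "IPV4",
        "Addresses": list(addresses),
    }

ipset = ip_blocklist(["203.0.113.0/32"])  # /32 = a single IPv4 address
```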
An ecommerce website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The application is stateless and elastic and scales from a minimum of 10 instances, up to a maximum of 200 instances. For at least 80% of the time at least 40 instances are required. Which solution should be used to minimize costs?
- 1: Purchase Reserved Instances to cover 200 instances
- 2: Purchase Reserved Instances to cover 80 instances. Use Spot Instances to cover the remaining instances
- 3: Purchase On-Demand Instances to cover 40 instances. Use Spot Instances to cover the remaining instances
- 4: Purchase Reserved Instances to cover 40 instances. Use On-Demand and Spot Instances to cover the remaining instances
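The reasoning behind covering only the 40-instance baseline with Reserved Instances can be checked with a quick cost model. The hourly prices below are illustrative placeholders, not real AWS pricing, and the variable remainder is modeled as Spot for simplicity:

```python
# Illustrative hourly prices (placeholders, not real AWS pricing).
RI, ON_DEMAND, SPOT = 0.06, 0.10, 0.03

def hourly_cost(instances, reserved=40):
    """Cost of covering the baseline (40 instances, needed >=80% of the time)
    with RIs and the variable remainder with cheaper Spot capacity."""
    extra = max(0, instances - reserved)
    # RIs are billed whether fully used or not, hence reserved * RI always.
    return reserved * RI + extra * SPOT

baseline_cost = hourly_cost(40)   # 40 RIs only: 40 * 0.06 = 2.4
peak_cost = hourly_cost(200)      # 40 RIs + 160 Spot: 2.4 + 160 * 0.03 = 7.2
```

Reserving all 200 would cost 200 * 0.06 = 12.0 per hour around the clock, so reserving only the steady baseline is cheaper under these assumed prices.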
A solutions architect is creating a system that will run analytics on financial data for 4 hours a night, 5 days a week. The analysis is expected to run for the same duration and cannot be interrupted once it is started. The system will be required for a minimum of 1 year. Which type of Amazon EC2 instances should be used to reduce the cost of the system?
- 1: Spot Instances
- 2: On-Demand Instances
- 3: Standard Reserved Instances
- 4: Scheduled Reserved Instances
A solutions architect needs to backup some application log files from an online ecommerce store to Amazon S3. It is unknown how often the logs will be accessed or which logs will be accessed the most. The solutions architect must keep costs as low as possible by using the appropriate S3 storage class. Which S3 storage class should be implemented to meet these requirements?
- 1: S3 Glacier
- 2: S3 Intelligent-Tiering
- 3: S3 Standard-Infrequent Access (S3 Standard-IA)
- 4: S3 One Zone-Infrequent Access (S3 One Zone-IA)
A solutions architect is designing a new service that will use an Amazon API Gateway API on the frontend. The service will need to persist data in a backend database using key-value requests. Initially, the data requirements will be around 1 GB and future growth is unknown. Requests can range from 0 to over 800 requests per second. Which combination of AWS services would meet these requirements? (Select TWO)
- 1: AWS Fargate
- 2: AWS Lambda
- 3: Amazon DynamoDB
- 4: Amazon EC2 Auto Scaling
- 5: Amazon RDS
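A serverless pairing such as Lambda plus DynamoDB suits the unknown growth and 0-800 req/s range. A minimal sketch of the handler behind API Gateway; the table resource is passed in to keep the function testable, and all names are hypothetical:

```python
import json

def handler(event, table):
    """API Gateway -> Lambda -> DynamoDB key-value write (sketch).

    `event["body"]` is the API Gateway proxy payload; `table` would be a
    boto3 DynamoDB Table resource in a real deployment.
    """
    item = json.loads(event["body"])              # e.g. {"id": ..., "value": ...}
    table.put_item(Item=item)                     # key-value write to DynamoDB
    return {"statusCode": 200, "body": json.dumps({"id": item["id"]})}
```

Both Lambda and DynamoDB (in on-demand capacity mode) scale from zero to the stated peak without pre-provisioning.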
A company’s application is running on Amazon EC2 instances in a single Region. In the event of a disaster, a solutions architect needs to ensure that the resources can also be deployed to a second Region. Which combination of actions should the solutions architect take to accomplish this? (Select TWO)
- 1: Detach a volume on an EC2 instance and copy it to an Amazon S3 bucket in the second Region
- 2: Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second Region
- 3: Launch a new EC2 instance in the second Region and copy a volume from Amazon S3 to the new instance
- 4: Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second Region for the destination
- 5: Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the second Region using that EBS volume
A solutions architect is creating a document submission application for a school. The application will use an Amazon S3 bucket for storage. The solution must prevent accidental deletion of the documents and ensure that all versions of the documents are available. Users must be able to upload and modify the documents. Which combination of actions should be taken to meet these requirements? (Select TWO)
- 1: Set read-only permissions on the bucket
- 2: Enable versioning on the bucket
- 3: Attach an IAM policy to the bucket
- 4: Enable MFA Delete on the bucket
- 5: Encrypt the bucket using AWS SSE-S3
A solutions architect is designing an application on AWS. The compute layer will run in parallel across EC2 instances. The compute layer should scale based on the number of jobs to be processed. The compute layer is stateless. The solutions architect must ensure that the application is loosely coupled and the job items are durably stored. Which design should the solutions architect use?
- 1: Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage
- 2: Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage
- 3: Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue
- 4: Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic
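Scaling on queue depth (option 3) is usually reasoned about as backlog per instance: divide the queue depth by the number of messages one instance can absorb, then clamp to the group's bounds. All numbers below are illustrative:

```python
import math

def desired_capacity(queue_depth, msgs_per_instance=10, minimum=1, maximum=20):
    """Backlog-per-instance scaling sketch for an SQS-driven Auto Scaling group.

    queue_depth would come from the SQS ApproximateNumberOfMessages metric;
    msgs_per_instance and the min/max bounds are illustrative tuning values.
    """
    needed = math.ceil(queue_depth / msgs_per_instance)
    return max(minimum, min(maximum, needed))

capacity = desired_capacity(95)  # ceil(95 / 10) = 10 instances
```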
A team is planning to run analytics jobs on log files each day and requires a storage solution. The size and number of the logs are unknown and the data will persist for 24 hours only. What is the MOST cost-effective solution?
- 1: Amazon S3 Glacier Deep Archive
- 2: Amazon S3 Standard
- 3: Amazon S3 Intelligent-Tiering
- 4: Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
A company runs a web application that serves weather updates. The application runs on a fleet of Amazon EC2 instances in a Multi-AZ Auto Scaling group behind an Application Load Balancer (ALB). The instances store data in an Amazon Aurora database. A solutions architect needs to make the application more resilient to sporadic increases in request rates. Which architecture should the solutions architect implement? (Select TWO)
- 1: Add an AWS WAF in front of the ALB
- 2: Add Amazon Aurora Replicas
- 3: Add an AWS Transit Gateway to the Availability Zones
- 4: Add an AWS Global Accelerator endpoint
- 5: Add an Amazon CloudFront distribution in front of the ALB
An Amazon VPC contains several Amazon EC2 instances. The instances need to make API calls to Amazon DynamoDB. A solutions architect needs to ensure that the API calls do not traverse the internet. How can this be accomplished? (Select TWO)
- 1: Create a route table entry for the endpoint
- 2: Create a gateway endpoint for DynamoDB
- 3: Create a new DynamoDB table that uses the endpoint
- 4: Create an ENI for the endpoint in each of the subnets of the VPC
- 5: Create a VPC peering connection between the VPC and DynamoDB
A solutions architect is designing the infrastructure to run an application on Amazon EC2 instances. The application requires high availability and must dynamically scale based on demand to be cost efficient. What should the solutions architect do to meet these requirements?
- 1: Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Regions
- 2: Configure an Amazon CloudFront distribution in front of an Auto Scaling group to deploy instances to multiple Regions
- 3: Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Availability Zones
- 4: Configure an Amazon API Gateway API in front of an Auto Scaling group to deploy instances to multiple Availability Zones
A retail company with many stores and warehouses is implementing IoT sensors to gather monitoring data from devices in each location. The data will be sent to AWS in real time. A solutions architect must provide a solution that ensures events are received in order for each device and that the data is saved for future processing. Which solution would be MOST efficient?
- 1: Use Amazon Kinesis Data Streams for real-time events with a partition key for each device. Use Amazon Kinesis Data Firehose to save data to Amazon S3
- 2: Use Amazon Kinesis Data Streams for real-time events with a shard for each device. Use Amazon Kinesis Data Firehose to save data to Amazon EBS
- 3: Use an Amazon SQS FIFO queue for real-time events with one queue for each device. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS
- 4: Use an Amazon SQS standard queue for real-time events with one queue for each device. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3
An organization wants to share regular updates about their charitable work using static webpages. The pages are expected to generate a large number of views from around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective solution. Which action should the solutions architect take to accomplish this?
- 1: Generate presigned URLs for the files
- 2: Use cross-Region replication to all Regions
- 3: Use the geoproximity feature of Amazon Route 53
- 4: Use Amazon CloudFront with the S3 bucket as its origin
An insurance company has a web application that serves users in the United Kingdom and Australia. The application includes a database tier using a MySQL database hosted in eu-west-2. The web tier runs from eu-west-2 and ap-southeast-2. Amazon Route 53 geoproximity routing is used to direct users to the closest web tier. It has been noted that Australian users receive slow response times to queries. Which changes should be made to the database tier to improve performance?
- 1: Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in the Australian Region
- 2: Migrate the database to Amazon DynamoDB. Use DynamoDB global tables to enable replication to additional Regions
- 3: Deploy MySQL instances in each Region. Deploy an Application Load Balancer in front of MySQL to reduce the load on the primary instance
- 4: Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure read replicas in ap-southeast-2
A web application runs in public and private subnets. The application architecture consists of a web tier and database tier running on Amazon EC2 instances. Both tiers run in a single Availability Zone (AZ). Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO)
- 1: Create new public and private subnets in the same AZ for high availability
- 2: Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs
- 3: Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer (ALB)
- 4: Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ
- 5: Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS Multi-AZ deployment
An application running on an Amazon ECS container instance using the EC2 launch type needs permissions to write data to Amazon DynamoDB. How can you assign these permissions only to the specific ECS task that is running the application?
- 1: Create an IAM policy with permissions to DynamoDB and attach it to the container instance
- 2: Create an IAM policy with permissions to DynamoDB and assign it to a task using the taskRoleArn parameter
- 3: Use a security group to allow outbound connections to DynamoDB and assign it to the container instance
- 4: Modify the AmazonECSTaskExecutionRolePolicy policy to add permissions for DynamoDB
An organization has a large amount of data on Windows (SMB) file shares in their on-premises data center. The organization would like to move data into Amazon S3. They would like to automate the migration of data over their AWS Direct Connect link. Which AWS service can assist them?
- 1: AWS Database Migration Service (DMS)
- 2: AWS CloudFormation
- 3: AWS Snowball
- 4: AWS DataSync
The database tier of a web application is running on a Windows server on-premises. The database is a Microsoft SQL Server database. The application owner would like to migrate the database to an Amazon RDS instance. How can the migration be executed with minimal administrative effort and downtime?
- 1: Use the AWS Server Migration Service (SMS) to migrate the server to Amazon EC2. Use AWS Database Migration Service (DMS) to migrate the database to RDS
- 2: Use the AWS Database Migration Service (DMS) to directly migrate the database to RDS
- 3: Use AWS DataSync to migrate the data from the database to Amazon S3. Use AWS Database Migration Service (DMS) to migrate the database to RDS
- 4: Use the AWS Database Migration Service (DMS) to directly migrate the database to RDS. Use the Schema Conversion Tool (SCT) to enable conversion from Microsoft SQL Server to Amazon RDS
A new application will run across multiple Amazon ECS tasks. Front-end application logic will process data and then pass that data to a back-end ECS task to perform further processing and write the data to a datastore. The architect would like to reduce interdependencies so that failures do not impact other components. Which solution should the architect use?
- 1: Create an Amazon Kinesis Firehose delivery stream and configure the front-end to add data to the stream and the back-end to read data from the stream
- 2: Create an Amazon Kinesis Firehose delivery stream that delivers data to an Amazon S3 bucket, configure the front-end to write data to the stream and the back-end to read data from Amazon S3
- 3: Create an Amazon SQS queue that pushes messages to the back-end. Configure the front-end to add messages to the queue
- 4: Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages
An application receives images uploaded by customers and stores them on Amazon S3. An AWS Lambda function then processes the images to add graphical elements. The processed images need to be available for users to download for 30 days, after which time they can be deleted. Processed images can be easily recreated from original images. The original images need to be immediately available for 30 days and accessible within 24 hours for another 90 days. Which combination of Amazon S3 storage classes is most cost-effective for the original and processed images? (Select TWO)
- 1: Store the original images in STANDARD for 30 days, transition to GLACIER for 90 days, then expire the data
- 2: Store the original images in STANDARD_IA for 30 days and then transition to DEEP_ARCHIVE
- 3: Store the processed images in ONEZONE_IA and then expire the data after 30 days
- 4: Store the processed images in STANDARD and then transition to GLACIER after 30 days
- 5: Store the original images in STANDARD for 30 days, transition to DEEP_ARCHIVE for 90 days, then expire the data
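The pairing often cited for this card is options 3 and 5: recreatable processed images can tolerate ONEZONE_IA and simply expire, while originals move to DEEP_ARCHIVE (standard retrievals complete within 12 hours, inside the 24-hour window). A lifecycle sketch of that pairing; the prefixes are hypothetical:

```python
# Lifecycle rules sketch: processed images (stored in ONEZONE_IA at upload)
# expire after 30 days; originals transition to DEEP_ARCHIVE at day 30 and
# expire at day 120 (30 + 90). The "processed/" and "originals/" prefixes
# are hypothetical.
def lifecycle_rules():
    """Rules list for s3.put_bucket_lifecycle_configuration."""
    return [
        {
            "ID": "processed",
            "Filter": {"Prefix": "processed/"},
            "Status": "Enabled",
            "Expiration": {"Days": 30},
        },
        {
            "ID": "originals",
            "Filter": {"Prefix": "originals/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
            "Expiration": {"Days": 120},
        },
    ]

rules = lifecycle_rules()
```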