April - May 2024 Flashcards
A company has developed a multi-account strategy on AWS by using AWS Control Tower. The company has provided individual AWS accounts to each of its developers. The company wants to implement controls to limit the AWS resource costs that the developers incur. Which solution will meet these requirements with the LEAST operational overhead?
A. Instruct each developer to tag all their resources with a tag that has a key of CostCenter and a value of the developer's name. Use the required-tags AWS Config managed rule to check for the tag. Create an AWS Lambda function to terminate resources that do not have the tag. Configure AWS Cost Explorer to send a daily report to each developer to monitor their spending.
B. Use AWS Budgets to establish budgets for each developer account. Set up budget alerts for actual and forecast values to notify developers when they exceed or expect to exceed their assigned budget. Use AWS Budgets actions to apply a DenyAll policy to the developer’s IAM role to prevent additional resources from being launched when the assigned budget is reached.
C. Use AWS Cost Explorer to monitor and report on costs for each developer account. Configure Cost Explorer to send a daily report to each developer to monitor their spending. Use AWS Cost Anomaly Detection to detect anomalous spending and provide alerts
D. Use AWS Service Catalog to allow developers to launch resources within a limited cost range. Create AWS Lambda functions in each AWS account to stop running resources at the end of each work day. Configure the Lambda functions to resume the resources at the start of each work day.
B. Use AWS Budgets to establish budgets for each developer account. Set up budget alerts for actual and forecast values to notify developers when they exceed or expect to exceed their assigned budget. Use AWS Budgets actions to apply a DenyAll policy to the developer’s IAM role to prevent additional resources from being launched when the assigned budget is reached.
- C only reports and alerts; it doesn't enforce a spending limit
- A and D depend on custom Lambda functions and ongoing manual effort, which adds operational overhead
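For reference, a minimal boto3 sketch of option B, assuming hypothetical account IDs, role names, and a pre-created DenyAll policy and Budgets execution role:

```python
import boto3

budgets = boto3.client("budgets")
ACCOUNT_ID = "111122223333"  # hypothetical developer account

# Monthly cost budget for the developer account
budgets.create_budget(
    AccountId=ACCOUNT_ID,
    Budget={
        "BudgetName": "dev-monthly-budget",
        "BudgetLimit": {"Amount": "200", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
)

# Budget action: attach a DenyAll IAM policy to the developer's role
# automatically once actual spend reaches 100% of the budget
budgets.create_budget_action(
    AccountId=ACCOUNT_ID,
    BudgetName="dev-monthly-budget",
    NotificationType="ACTUAL",
    ActionType="APPLY_IAM_POLICY",
    ActionThreshold={"ActionThresholdValue": 100.0, "ActionThresholdType": "PERCENTAGE"},
    Definition={
        "IamActionDefinition": {
            "PolicyArn": f"arn:aws:iam::{ACCOUNT_ID}:policy/DenyAll",  # hypothetical policy
            "Roles": ["developer-role"],                               # hypothetical role
        }
    },
    ExecutionRoleArn=f"arn:aws:iam::{ACCOUNT_ID}:role/budgets-action-role",  # hypothetical
    ApprovalModel="AUTOMATIC",
    Subscribers=[{"SubscriptionType": "EMAIL", "Address": "dev@example.com"}],
)
```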
A solutions architect is designing a three-tier web app. The architecture consists of an internet-facing Application Load Balancer (ALB) and a web tier that is hosted on Amazon EC2 instances in private subnets. The application tier with the business logic runs on EC2 instances in private subnets. The database tier consists of Microsoft SQL Server that runs on EC2 instances in private subnets. Security is a high priority for the company. Which combination of security group configurations should the solutions architect use? (Choose 3)
A. Configure the security group for the web tier to allow inbound HTTPS traffic from the security group for the ALB
B. Configure the security group for the web tier to allow outbound HTTPS traffic to 0.0.0.0/0.
C. Configure the security group for the database tier to allow inbound Microsoft SQL Server traffic from the security group for the application tier.
D. Configure the security group for the database tier to allow outbound HTTPS traffic and Microsoft SQL Server traffic to the security group for the web tier
E. Configure the security group for the application tier to allow inbound HTTPS traffic from the security group for the web tier
F. Configure the security group for the application tier to allow outbound HTTPS traffic and Microsoft SQL Server traffic to the security group for the web tier
A. Configure the security group for the web tier to allow inbound HTTPS traffic from the security group for the ALB
C. Configure the security group for the database tier to allow inbound Microsoft SQL Server traffic from the security group for the application tier.
E. Configure the security group for the application tier to allow inbound HTTPS traffic from the security group for the web tier
- Traffic flows web tier → application tier → database tier; security groups are stateful, so the lower tiers never need outbound rules back to the web tier (eliminates D/F)
- Rules should reference security groups, not wide-open CIDR ranges like 0.0.0.0/0 (eliminates B); see the sketch below
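A minimal sketch of the chained security-group pattern (group IDs are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical security group IDs for each tier
ALB_SG, WEB_SG, APP_SG, DB_SG = "sg-0aaa", "sg-0bbb", "sg-0ccc", "sg-0ddd"

def allow_from_sg(target_sg, source_sg, port):
    """Allow inbound TCP on `port` to target_sg only from members of source_sg."""
    ec2.authorize_security_group_ingress(
        GroupId=target_sg,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "UserIdGroupPairs": [{"GroupId": source_sg}],
        }],
    )

allow_from_sg(WEB_SG, ALB_SG, 443)   # A: web tier accepts HTTPS only from the ALB
allow_from_sg(APP_SG, WEB_SG, 443)   # E: app tier accepts HTTPS only from the web tier
allow_from_sg(DB_SG, APP_SG, 1433)   # C: DB tier accepts SQL Server (TCP 1433) only from the app tier
```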
A company has released a new version of its production application. The company's workload uses Amazon EC2, AWS Lambda, AWS Fargate, and Amazon SageMaker. The company wants to cost optimize the workload now that usage is at a steady state. The company wants to cover the most services with the fewest Savings Plans. Which combination of Savings Plans will meet these requirements? (Choose 2)
A. Purchase an EC2 Instance Savings Plan for Amazon EC2 and SageMaker
B. Purchase a Compute Savings Plan for Amazon EC2, Lambda, and SageMaker
C. Purchase a SageMaker Savings Plan
D. Purchase a Compute Savings Plan for Lambda, Fargate, and Amazon EC2
E. Purchase an EC2 Instance Savings Plan for Amazon EC2 and Fargate
C. Purchase a SageMaker Savings Plan
D. Purchase a Compute Savings Plan for Lambda, Fargate, and Amazon EC2
- EC2 Instance Savings Plans apply only to EC2 instance usage; they do not cover Fargate, Lambda, or SageMaker (A/E)
- Compute Savings Plans cover EC2, Lambda, and Fargate but do NOT cover SageMaker, which needs its own SageMaker Savings Plan (B)
A company uses a Microsoft SQL Server database. The company's apps are connected to the database. The company wants to migrate to an Amazon Aurora PostgreSQL database with minimal changes to the application code. Which combination of steps will meet these requirements? (Choose 2)
A. Use the AWS Schema Conversion Tool (AWS SCT) to rewrite the SQL queries in the apps
B. Enable Babelfish on Aurora PostgreSQL to run the SQL queries from the apps
C. Migrate the database schema and data by using the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS)
D. Use Amazon RDS Proxy to connect the apps to Aurora PostgreSQL
E. Use AWS Database Migration Service (AWS DMS) to rewrite the SQL queries in the apps
B. Enable Babelfish on Aurora PostgreSQL to run the SQL queries from the apps
C. Migrate the database schema and data by using the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS)
- Babelfish lets Aurora PostgreSQL understand T-SQL, so the apps can keep running their existing SQL Server queries
- Rewriting the queries with AWS SCT (A) means changing application code, which the company wants to avoid; SCT conversion is also manual and error-prone
- RDS Proxy = connection management; it does nothing for SQL dialect compatibility (D)
- DMS migrates data; it does not convert or rewrite SQL (E)
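A hedged sketch of how Babelfish is enabled: it is switched on at cluster creation through the rds.babelfish_status parameter in a cluster parameter group (the parameter group family, engine version, and identifiers below are assumptions and must match the Aurora PostgreSQL version actually launched):

```python
import boto3

rds = boto3.client("rds")

# Cluster parameter group with Babelfish turned on
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-pg",
    DBParameterGroupFamily="aurora-postgresql15",  # assumption: must match engine version
    Description="Babelfish enabled",
)
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-pg",
    Parameters=[{
        "ParameterName": "rds.babelfish_status",
        "ParameterValue": "on",
        "ApplyMethod": "pending-reboot",
    }],
)

# New Aurora PostgreSQL cluster using that parameter group; the apps keep
# sending T-SQL over the SQL Server (TDS) port while SCT/DMS move schema and data
rds.create_db_cluster(
    DBClusterIdentifier="aurora-babelfish",
    Engine="aurora-postgresql",
    EngineVersion="15.4",  # assumption
    MasterUsername="admin_user",
    MasterUserPassword="REPLACE_ME",
    DBClusterParameterGroupName="babelfish-pg",
)
```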
A company plans to rehost an app to Amazon EC2 instances that use Amazon Elastic Block Store (Amazon EBS) as the attached storage. A solutions architect must design a solution to ensure that all newly created Amazon EBS volumes are encrypted by default. The solution must also prevent the creation of unencrypted EBS volumes. Which solution will meet these requirements?
A. Configure the EC2 account attributes to always encrypt new EBS volumes.
B. Use AWS Config. Configure the encrypted-volumes identifier. Apply the default AWS Key Management Service (AWS KMS) key.
C. Configure AWS Systems Manager to create encrypted copies of the EBS volumes. Reconfigure the EC2 instances to use the encrypted volumes.
D. Create a customer managed key in AWS Key Management Service (AWS KMS). Configure AWS Migration Hub to use the key when the company migrates workloads.
B. Use AWS Config. Configure the encrypted-volumes identifier. Apply the default AWS Key Management Service (AWS KMS) key.
- The encrypted-volumes AWS Config managed rule continuously checks that EBS volumes are encrypted and flags noncompliant ones
- A doesn't prevent the creation of unencrypted volumes
- C is extra work: it makes encrypted copies after unencrypted volumes already exist
- D sets up a key but enforces no encryption on new volumes
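Option B boils down to one call; a sketch (the rule name is arbitrary; ENCRYPTED_VOLUMES is the AWS managed rule identifier):

```python
import boto3

config = boto3.client("config")

# Register the AWS managed rule that flags unencrypted attached EBS volumes
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-encrypted-volumes",  # arbitrary name
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "ENCRYPTED_VOLUMES",  # AWS managed rule ID
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
        # Optional: require a specific KMS key instead of accepting any key
        # "InputParameters": '{"kmsId": "alias/aws/ebs"}',
    }
)
```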
An ecommerce company wants to collect user clickstream data from the company's website for real-time analysis. The website experiences fluctuating traffic patterns throughout the day. The company needs a scalable solution that can adapt to varying levels of traffic. Which solution will meet these requirements?
A. Use a data stream in Amazon Kinesis Data Streams in on-demand mode to capture the clickstream data. Use AWS Lambda to process the data in real time
B. Use Amazon Kinesis Data Firehose to capture the clickstream data. Use AWS Glue to process the data in real time
C. Use Amazon Kinesis Video Streams to capture the clickstream data. Use AWS Glue to process the data in real time
D. Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to capture the clickstream data. Use AWS Lambda to process the data in real time.
A. Use a data stream in Amazon Kinesis Data Streams in on-demand mode to capture the clickstream data. Use AWS Lambda to process the data in real time
- Kinesis Data Streams is built for real-time event ingestion like clickstreams; on-demand mode scales automatically with fluctuating traffic and is more cost effective here
- Firehose delivers data to destinations like S3 for batch analysis, and Glue is an ETL service, not a real-time processor (B)
- Kinesis Video Streams is for video data, not clickstream events (C)
- Flink processes streams but does not capture data itself, and it adds complexity (D)
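A sketch of option A (stream and function names are hypothetical; the Lambda function is assumed to already exist):

```python
import boto3

kinesis = boto3.client("kinesis")
lam = boto3.client("lambda")

# On-demand mode: no shard capacity planning, scales with fluctuating traffic
kinesis.create_stream(
    StreamName="clickstream",
    StreamModeDetails={"StreamMode": "ON_DEMAND"},
)
kinesis.get_waiter("stream_exists").wait(StreamName="clickstream")

# Wire an existing Lambda function to the stream for real-time processing
stream_arn = kinesis.describe_stream(StreamName="clickstream")["StreamDescription"]["StreamARN"]
lam.create_event_source_mapping(
    EventSourceArn=stream_arn,
    FunctionName="process-clicks",  # hypothetical function
    StartingPosition="LATEST",
    BatchSize=100,
)
```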
A global company runs its workloads on AWS. The company's app uses Amazon S3 buckets across AWS Regions for sensitive data storage and analysis. The company stores millions of objects in multiple S3 buckets daily. The company wants to identify all S3 buckets that are not versioning-enabled. Which solution will meet these requirements?
A. Set up an AWS CloudTrail event that has a rule to identify all S3 buckets that are not versioning-enabled across Regions
B. Use Amazon S3 Storage Lens to identify all S3 buckets that are not versioning-enabled across Regions
C. Enable IAM Access Analyzer for S3 to identify all S3 buckets that are not versioning-enabled across Regions
D. Create an S3 Multi-Region Access Point (MRAP) to identify all S3 buckets that are not versioning-enabled across Regions
B. Use Amazon S3 Storage Lens to identify all S3 buckets that are not versioning-enabled across Regions
- CloudTrail logs API activity, not current bucket configuration; pulling versioning status out of the logs would be manual (A)
- IAM Access Analyzer focuses on permission analysis, not bucket versioning (C)
- MRAP is for data access and replication, not bucket configuration auditing (D)
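Storage Lens exposes the versioning-enabled bucket count as a dashboard metric across the whole organization; for contrast, doing the same check by hand for a single account is exactly the kind of scripting it replaces:

```python
import boto3

s3 = boto3.client("s3")

# Manual single-account equivalent of what Storage Lens reports without scripting
unversioned = []
for bucket in s3.list_buckets()["Buckets"]:
    status = s3.get_bucket_versioning(Bucket=bucket["Name"]).get("Status")
    if status != "Enabled":  # "Status" is absent until versioning is first enabled
        unversioned.append(bucket["Name"])

print("Buckets without versioning:", unversioned)
```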
A company needs to optimize its Amazon S3 storage costs for an app that generates many files that cannot be recreated. Each file is approximately 5 MB and is stored in Amazon S3 Standard storage. The company must store the files for 4 years before the files can be deleted. The files must be immediately accessible. The files are frequently accessed in the first 30 days of object creation, but they are rarely accessed after the first 30 days. Which solution will meet these requirements MOST cost-effectively?
A. Create an S3 lifecycle policy to move the files to S3 Glacier Instant Retrieval 30 days after object creation. Delete the files 4 years after object creation
B. Create an S3 lifecycle policy to move the files to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days after object creation. Delete the files 4 years after object creation
C. Create an S3 lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Delete the files 4 years after object creation
D. Create an S3 lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Move the files to S3 Glacier Flexible Retrieval 4 years after object creation
A. Create an S3 lifecycle policy to move the files to S3 Glacier Instant Retrieval 30 days after object creation. Delete the files 4 years after object creation
- S3 One Zone-IA keeps data in a single Availability Zone; files that cannot be recreated should not live there, so B is out
- Glacier Instant Retrieval still gives millisecond access but costs less than Standard-IA for rarely accessed data, so A beats C; D never deletes the files
- immediate/frequent access = Standard
- no clear pattern = Intelligent-Tiering
- rarely accessed = Glacier
- infrequent access = Standard-IA
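Option A as a lifecycle configuration sketch (bucket name hypothetical; 4 years approximated as 1,460 days):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="app-files",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "glacier-ir-30d-expire-4y",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object
            "Transitions": [{
                "Days": 30,
                "StorageClass": "GLACIER_IR",  # Instant Retrieval keeps millisecond access
            }],
            "Expiration": {"Days": 1460},  # ~4 years after object creation
        }]
    },
)
```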
A company runs its critical storage app in the AWS Cloud. The app uses Amazon S3 in two AWS Regions. The company wants the app to send remote user data to the nearest S3 bucket with no public network congestion. The company also wants the app to fail over with the least amount of management of Amazon S3. Which solution will meet these requirements?
A. Implement an active-active design between the two regions. Configure the app to use the regional S3 endpoints closest to the user.
B. Use an active-passive configuration with S3 Multi-Region Access Points. Create a global endpoint for each of the Regions.
C. Send user data to the regional S3 endpoints closest to the user. Configure an S3 cross-account replication rule to keep the S3 buckets synchronized.
D. Set up Amazon S3 to use Multi-Region Access Points in an active-active configuration with a single global endpoint. Configure S3 Cross-Region Replication.
D. Set up Amazon S3 to use Multi-Region Access Points in an active-active configuration with a single global endpoint. Configure S3 Cross-Region Replication.
- Configuring the app per user location is manual and inefficient (A)
- Active-passive setups require manual intervention to fail over (B)
- C lacks intelligent routing and automatic failover
- The single MRAP global endpoint intelligently routes users to the nearest S3 bucket with minimal network latency and fails over automatically (D)
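Creating the Multi-Region Access Point itself is a single async control-plane call made through us-west-2 (account ID and bucket names hypothetical); the Cross-Region Replication rules are configured separately on the buckets:

```python
import boto3, uuid

# MRAP control-plane requests are made through the us-west-2 endpoint
s3control = boto3.client("s3control", region_name="us-west-2")

s3control.create_multi_region_access_point(
    AccountId="111122223333",       # hypothetical account
    ClientToken=str(uuid.uuid4()),  # idempotency token
    Details={
        "Name": "app-mrap",
        "Regions": [
            {"Bucket": "app-data-us-east-1"},  # hypothetical buckets,
            {"Bucket": "app-data-eu-west-1"},  # one per Region
        ],
    },
)
# The app targets the single MRAP global endpoint; S3 routes each request to
# the nearest bucket over the AWS global network and fails over automatically.
```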
A company is migrating a data center from its on-premises location to AWS. The company has several legacy applications that are hosted on individual virtual servers. Changes to the app designs cannot be made. Each individual virtual server currently runs as its own EC2 instance. A solutions architect needs to ensure that the apps are reliable and fault tolerant after migration to AWS. The apps will run on Amazon EC2 instances. Which solution will meet these requirements?
A. Create an Auto Scaling group that has a minimum of one and a maximum of one. Create an Amazon Machine Image (AMI) of each app instance. Use the AMI to create EC2 instances in the Auto Scaling group. Configure an Application Load Balancer in front of the Auto Scaling group.
B. Use AWS Backup to create an hourly backup of the EC2 instance that hosts each app. Store the backup in Amazon S3 in a separate Availability Zone. Configure a disaster recovery process to restore the EC2 instance for each app from its most recent backup.
C. Create an Amazon Machine Image (AMI) of each app instance. Launch two new EC2 instances from the AMI. Place each EC2 instance in a separate Availability Zone. Configure a Network Load Balancer that has the EC2 instances as targets
D. Use AWS Migration Hub Refactor Spaces to migrate each app off the EC2 instance. Break down functionality from each app into individual components. Host each app on Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type.
C. Create an Amazon Machine Image (AMI) of each app instance. Launch two new EC2 instances from the AMI. Place each EC2 instance in a separate Availability Zone. Configure a Network Load Balancer that has the EC2 instances as targets
- An Auto Scaling group with min/max of one runs only a single instance at a time; a failed instance is replaced, but there is downtime while the replacement launches, so it is not fault tolerant (A)
- Backups don't prevent downtime; they are for recovery AFTER failure (B)
- Refactoring into containers on Fargate requires redesigning the apps, and design changes are not allowed (D)
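A sketch of option C for one legacy app (instance, subnet, and type values are placeholders); registering both instances with the NLB via the elbv2 API is omitted for brevity:

```python
import boto3

ec2 = boto3.client("ec2")

# Image the legacy server as-is; no application changes required
ami = ec2.create_image(
    InstanceId="i-0legacyapp",  # hypothetical source instance
    Name="legacy-app-ami",
)["ImageId"]
ec2.get_waiter("image_available").wait(ImageIds=[ami])

# One instance per Availability Zone (each subnet sits in a different AZ)
for subnet in ["subnet-0az1", "subnet-0az2"]:  # hypothetical subnet IDs
    ec2.run_instances(
        ImageId=ami,
        InstanceType="m5.large",  # placeholder sizing
        MinCount=1,
        MaxCount=1,
        SubnetId=subnet,
    )
# With both instances behind a Network Load Balancer, a single-AZ failure
# no longer takes the app down.
```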
A company wants to isolate its workloads by creating an AWS account for each workload. The company needs a solution that centrally manages networking components for the workloads. The solution also must create accounts with automatic security controls (guardrails). Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Control Tower to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts
B. Use AWS Organizations to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
C. Use AWS Control Tower to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
D. Use AWS Organizations to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
B. Use AWS Organizations to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
- Control Tower is broader than what AWS Organizations already provides here (A/C)
- Creating a VPC in EACH workload account increases work and complexity (C/D)
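The heart of option B, run from the central networking account (ARNs and account IDs hypothetical):

```python
import boto3

ram = boto3.client("ram")

# Share the networking account's subnets with a workload account
ram.create_resource_share(
    name="shared-vpc-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0priv1",  # hypothetical
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0pub1",
    ],
    principals=["444455556666"],    # workload account ID (an OU ARN also works)
    allowExternalPrincipals=False,  # keep the share inside the organization
)
# Workload accounts launch resources directly into the shared subnets,
# so no per-account VPCs or transit gateway attachments are needed.
```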
A company hosts a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The website serves static content. Website traffic is increasing. The company wants to minimize the website hosting costs. Which solution will meet these requirements?
A. Move the website to an Amazon S3 bucket. Configure an Amazon CloudFront distribution for the S3 bucket
B. Move the website to an Amazon S3 bucket. Configure an Amazon ElastiCache cluster for the S3 bucket
C. Move the website to AWS Amplify. Configure an ALB to resolve to the Amplify website
D. Move the website to AWS Amplify. Configure EC2 instances to cache the website
A. Move the website to an Amazon S3 bucket. Configure an Amazon CloudFront distribution for the S3 bucket
- Static content = S3 (eliminates C/D)
- ElastiCache is an in-memory cache for databases and applications, not a front end for S3; it adds complexity and cost compared to CloudFront, which is the standard pairing (B)
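A trimmed sketch of option A (bucket domain is hypothetical; a production setup would also add origin access control so the bucket stays private):

```python
import boto3, uuid

cf = boto3.client("cloudfront")

cf.create_distribution(DistributionConfig={
    "CallerReference": str(uuid.uuid4()),  # idempotency token
    "Comment": "static website",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "site-bucket",
        "DomainName": "my-site.s3.amazonaws.com",  # hypothetical bucket
        "S3OriginConfig": {"OriginAccessIdentity": ""},
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "site-bucket",
        "ViewerProtocolPolicy": "redirect-to-https",
        # ID of the AWS managed "CachingOptimized" cache policy
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
    },
})
```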
A company is implementing a shared storage solution for a media application that the company hosts on AWS. The company needs the ability to use SMB clients to access stored data. Which solution will meet these requirements with the LEAST admin overhead?
A. Create an AWS Storage Gateway Volume Gateway. Create a file share that uses the required client protocol. Connect the app server to the file share
B. Create an AWS Storage Gateway Tape Gateway. Configure tapes to use Amazon S3. Connect the app server to the Tape Gateway.
C. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the app server to the file share
D. Create an Amazon FSx for Windows File Server file system. Connect the app server to the file system
D. Create an Amazon FSx for Windows File Server file system. Connect the app server to the file system
- Volume Gateway (iSCSI block storage) and Tape Gateway (virtual tapes) do NOT support SMB; they are used mainly for backup (A/B)
- A self-managed Windows file server on EC2 means manual patching and maintenance (C)
- FSx for Windows File Server is an AWS managed service with native SMB support (D)
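Option D is essentially one call (subnet, sizing, and directory values are placeholders); SMB clients then map a share from the file system's DNS name:

```python
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                       # GiB, placeholder sizing
    SubnetIds=["subnet-0az1", "subnet-0az2"],  # hypothetical subnets
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",        # managed multi-AZ file server
        "PreferredSubnetId": "subnet-0az1",
        "ThroughputCapacity": 32,              # MB/s, placeholder
        "ActiveDirectoryId": "d-1234567890",   # hypothetical AWS Managed AD
    },
)
# Clients connect over standard SMB, e.g.:
#   net use Z: \\<file-system-dns-name>\share
```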
A company is designing its production application's disaster recovery (DR) strategy. The app is backed by a MySQL database on an Amazon Aurora cluster in the us-east-1 Region. The company has chosen the us-west-1 Region as its DR Region. The company's target recovery point objective (RPO) is 5 minutes and the target recovery time objective (RTO) is 20 minutes. The company wants to minimize config changes. Which solution will meet these requirements with the MOST operational efficiency?
A. Create an Aurora read replica in us-west-1 similar in size to the production application's Aurora MySQL cluster writer instance.
B. Convert the Aurora cluster to an Aurora global database. Configure managed failover
C. Create a new Aurora cluster in us-west-1 that has Cross-Region Replication
D. Create a new Aurora cluster in us-west-1. Use AWS Database Migration Service (AWS DMS) to sync both clusters
B. Convert the Aurora cluster to an Aurora global database. Configure managed failover
- A read replica needs manual promotion, which doesn't give fast failover (A)
- A separate cluster with cross-Region replication has a higher RPO than an Aurora global database and requires a more complex setup (C)
- DMS is for database migration, not continuous DR replication (D)
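Converting to a global database roughly looks like this (identifiers hypothetical): wrap the existing us-east-1 cluster in a global cluster, add a secondary in us-west-1, and call the managed failover API during a DR event:

```python
import boto3

# Promote the existing us-east-1 cluster to primary of a global database
rds_east = boto3.client("rds", region_name="us-east-1")
rds_east.create_global_cluster(
    GlobalClusterIdentifier="app-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:app-prod",
)

# Secondary cluster in the DR Region; replication lag is typically under a
# second, comfortably inside the 5-minute RPO
rds_west = boto3.client("rds", region_name="us-west-1")
rds_west.create_db_cluster(
    DBClusterIdentifier="app-dr",
    Engine="aurora-mysql",  # must match the primary's engine/version
    GlobalClusterIdentifier="app-global",
)

# DR event: managed failover promotes the us-west-1 cluster
rds_west.failover_global_cluster(
    GlobalClusterIdentifier="app-global",
    TargetDbClusterIdentifier="arn:aws:rds:us-west-1:111122223333:cluster:app-dr",
)
```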
A company runs a critical data analysis job each week before the first day of the work week. The job requires at least 1 hour to complete the analysis. The job is stateful and cannot tolerate interruptions. The company needs a solution to run the job on AWS. Which solution will meet these requirements?
A. Create a container for the job. Schedule the job to run as an AWS Fargate task on an Amazon Elastic Container Service (Amazon ECS) cluster by using Amazon EventBridge Scheduler
B. Configure the job to run in an AWS Lambda function. Create a scheduled rule in Amazon EventBridge to invoke the Lambda function
C. Configure an Auto Scaling group of Amazon EC2 Spot Instances that run Amazon Linux. Configure a crontab entry on the instances to run the analysis
D. Configure an AWS DataSync task to run the job. Configure a cron expression to run the task on a schedule
A. Create a container for the job. Schedule the job to run as an AWS Fargate task on an Amazon Elastic Container Service (Amazon ECS) cluster by using Amazon EventBridge Scheduler
- "at least 1 hour" exceeds Lambda's 15-minute maximum timeout (B)
- "stateful and cannot tolerate interruptions" rules out Spot Instances, which can be reclaimed at any time (C)
- DataSync is a data transfer service; it cannot run an analysis job (D)
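Option A as an EventBridge Scheduler sketch (cluster, role, and task-definition ARNs, subnet, and the cron timing are placeholders):

```python
import boto3

scheduler = boto3.client("scheduler")

scheduler.create_schedule(
    Name="weekly-analysis",
    ScheduleExpression="cron(0 2 ? * SUN *)",  # Sunday 02:00 UTC, placeholder timing
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/analysis",    # hypothetical
        "RoleArn": "arn:aws:iam::111122223333:role/scheduler-ecs-role",  # hypothetical
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/analysis-job",
            "LaunchType": "FARGATE",  # runs uninterrupted for as long as the job needs
            "NetworkConfiguration": {
                "awsvpcConfiguration": {"Subnets": ["subnet-0az1"]},     # hypothetical
            },
        },
    },
)
```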