saa-c02-part-16 Flashcards
A user wants to list the IAM role that is attached to their Amazon EC2 instance. The user has login access to the EC2 instance but does not have IAM permissions.
What should a solutions architect do to retrieve this information?
- Run the following EC2 command:
curl http://169.254.169.254/latest/meta-data/iam/info
- Run the following EC2 command:
curl http://169.254.169.254/latest/user-data/iam/info
- Run the following EC2 command:
curl http://169.254.169.254/latest/dynamic/instance-identity/
- Run the following AWS CLI command:
aws iam get-instance-profile --instance-profile-name ExampleInstanceProfile
- Run the following EC2 command:
curl http://169.254.169.254/latest/meta-data/iam/info
IAM role that is attached to their Amazon EC2 instance = meta-data
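The winning option works because the instance metadata service at the link-local address 169.254.169.254 needs no IAM permissions, only shell access on the instance. A minimal sketch (the helper names are illustrative, not an AWS API):

```python
# Sketch: querying EC2 instance metadata from the instance itself.
# 169.254.169.254 is the instance metadata service; no IAM permissions
# are needed, only login access to the instance.
import urllib.request

IMDS_BASE = "http://169.254.169.254/latest/meta-data/"

def imds_url(path: str) -> str:
    """Build a metadata URL, e.g. for the attached IAM role info."""
    return IMDS_BASE + path.lstrip("/")

def fetch_iam_info() -> str:
    # Hypothetical helper: only works when run on an EC2 instance.
    # With IMDSv2 you would first PUT to /latest/api/token and send the
    # token in the X-aws-ec2-metadata-token header.
    with urllib.request.urlopen(imds_url("iam/info"), timeout=2) as resp:
        return resp.read().decode()

print(imds_url("iam/info"))
```

Off the instance, this same URL is unreachable, which is why the AWS CLI option (which needs IAM permissions) is wrong for this user.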
A company has an application that is hosted on Amazon EC2 instances in two private subnets. A solutions architect must make the application available on the public internet with the least amount of administrative effort.
What should the solutions architect recommend?
- Create a load balancer and associate two public subnets from the same Availability Zones as the private instances. Add the private instances to the load balancer.
- Create a load balancer and associate two private subnets from the same Availability Zones as the private instances. Add the private instances to the load balancer.
- Create an Amazon Machine Image (AMI) of the instances in the private subnet and restore in the public subnet. Create a load balancer and associate two public subnets from the same Availability Zones as the public instances.
- Create an Amazon Machine Image (AMI) of the instances in the private subnet and restore in the public subnet. Create a load balancer and associate two private subnets from the same Availability Zones as the public instances.
- Create a load balancer and associate two public subnets from the same Availability Zones as the private instances. Add the private instances to the load balancer.
public internet = public subnet needed = 1,3
least amount of administrative effort = 1
ALBs go in public subnets
https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
A company has two applications: a sender application that sends messages with payloads to be processed and a processing application intended to receive messages with payloads. The company wants to implement an AWS service to handle messages between the two applications. The sender application can send about 1,000 messages each hour. The messages may take up to 2 days to be processed. If the messages fail to process, they must be retained so that they do not impact the processing of any remaining messages.
Which solution meets these requirements and is the MOST operationally efficient?
- Set up an Amazon EC2 instance running a Redis database. Configure both applications to use the instance. Store, process, and delete the messages, respectively.
- Use an Amazon Kinesis data stream to receive the messages from the sender application. Integrate the processing application with the Kinesis Client Library (KCL).
- Integrate the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue to collect the messages that failed to process.
- Subscribe the processing application to an Amazon Simple Notification Service (Amazon SNS) topic to receive notifications to process. Integrate the sender application to write to the SNS topic.
- Integrate the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue to collect the messages that failed to process.
handle messages between the two applications = SQS
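The dead-letter queue is wired up through queue attributes on the source queue. A sketch of the attribute shape (ARN and maxReceiveCount are illustrative; in practice this dict would be passed to `set_queue_attributes` in boto3):

```python
# Sketch: SQS queue attributes that attach a dead-letter queue and extend
# retention past the 2-day processing window. Values are illustrative.
import json

def redrive_attributes(dlq_arn: str, max_receives: int) -> dict:
    return {
        # After max_receives failed receives, SQS moves the message to the
        # DLQ so it no longer blocks the remaining messages.
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": str(max_receives),
        }),
        # SQS retention can be raised to 14 days (default is 4), well above
        # the "up to 2 days to be processed" requirement.
        "MessageRetentionPeriod": str(14 * 24 * 60 * 60),
    }

attrs = redrive_attributes("arn:aws:sqs:us-east-1:123456789012:my-dlq", 5)
parsed = json.loads(attrs["RedrivePolicy"])
```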
A company’s website hosted on Amazon EC2 instances processes classified data stored in Amazon S3. Due to security concerns, the company requires a private and secure connection between its EC2 resources and Amazon S3.
Which solution meets these requirements?
- Set up S3 bucket policies to allow access from a VPC endpoint.
- Set up an IAM policy to grant read-write access to the S3 bucket.
- Set up a NAT gateway to access resources outside the private subnet.
- Set up an access key ID and a secret access key to access the S3 bucket.
- Set up S3 bucket policies to allow access from a VPC endpoint.
private and secure connection between its EC2 resources and Amazon S3 = endpoint
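A bucket policy restricting access to a gateway VPC endpoint typically denies any request that did not arrive through that endpoint. A sketch, with placeholder bucket and endpoint IDs:

```python
# Sketch: an S3 bucket policy that denies any access not made through a
# specific gateway VPC endpoint. Bucket name and vpce id are placeholders.
def vpce_only_policy(bucket: str, vpce_id: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowVPCEndpointOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            # aws:sourceVpce matches the VPC endpoint the request used, so
            # traffic never traverses the public internet.
            "Condition": {"StringNotEquals": {"aws:sourceVpce": vpce_id}},
        }],
    }

policy = vpce_only_policy("classified-data", "vpce-1a2b3c4d")
```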
A company hosts its multi-tier, public web application in the AWS Cloud. The web application runs on Amazon EC2 instances and its database runs on Amazon RDS. The company is anticipating a large increase in sales during an upcoming holiday weekend. A solutions architect needs to build a solution to analyze the performance of the web application with a granularity of no more than 2 minutes.
What should the solutions architect do to meet this requirement?
- Send Amazon CloudWatch logs to Amazon Redshift. Use Amazon QuickSight to perform further analysis.
- Enable detailed monitoring on all EC2 instances. Use Amazon CloudWatch metrics to perform further analysis.
- Create an AWS Lambda function to fetch EC2 logs from Amazon CloudWatch Logs. Use Amazon CloudWatch metrics to perform further analysis.
- Send EC2 logs to Amazon S3. Use Amazon Redshift to fetch logs from the S3 bucket to process raw data for further analysis with Amazon QuickSight.
- Enable detailed monitoring on all EC2 instances. Use Amazon CloudWatch metrics to perform further analysis.
granularity of no more than 2 minutes = CloudWatch basic monitoring default is 5 minutes; enabling detailed monitoring on the instance gives 1-minute granularity
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch.html
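The arithmetic behind this answer, spelled out:

```python
# Basic EC2 monitoring publishes metrics every 5 minutes, detailed
# monitoring every 1 minute; the requirement is a data point at least
# every 2 minutes.
BASIC_PERIOD_S = 300     # 5-minute basic monitoring
DETAILED_PERIOD_S = 60   # 1-minute detailed monitoring
REQUIRED_PERIOD_S = 120  # "granularity of no more than 2 minutes"

def meets_requirement(period_s: int) -> bool:
    return period_s <= REQUIRED_PERIOD_S

print(meets_requirement(BASIC_PERIOD_S), meets_requirement(DETAILED_PERIOD_S))
```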
A company has developed a new video game as a web application. The application is in a three-tier architecture in a VPC with Amazon RDS for MySQL in the database layer. Several players will compete concurrently online. The game's developers want to display a top-10 scoreboard in near-real time and offer the ability to stop and restore the game while preserving the current scores.
What should a solutions architect do to meet these requirements?
- Set up an Amazon ElastiCache for Memcached cluster to cache the scores for the web application to display.
- Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.
- Place an Amazon CloudFront distribution in front of the web application to cache the scoreboard in a section of the application.
- Create a read replica on Amazon RDS for MySQL to run queries to compute the scoreboard and serve the read traffic to the web application.
- Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.
scoreboard = leaderboard = Redis or DynamoDB
real-time analytics = Redis + ElastiCache
https://aws.amazon.com/blogs/database/building-a-real-time-gaming-leaderboard-with-amazon-elasticache-for-redis/
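In Redis the leaderboard pattern is a sorted set: ZADD records each player's score, ZREVRANGE returns the top N. The following is a pure-Python stand-in for what ElastiCache for Redis would do server-side, to show the shape of the pattern:

```python
# Pure-Python model of the Redis sorted-set leaderboard pattern.
def zadd(board: dict, player: str, score: float) -> None:
    board[player] = score  # like: ZADD leaderboard <score> <player>

def top_n(board: dict, n: int) -> list:
    # like: ZREVRANGE leaderboard 0 n-1 WITHSCORES (highest scores first)
    return sorted(board.items(), key=lambda kv: kv[1], reverse=True)[:n]

board = {}
zadd(board, "ayla", 4200)
zadd(board, "crono", 3100)
zadd(board, "frog", 4800)
print(top_n(board, 2))  # top-2 of the scoreboard
```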
A company is moving its on-premises Oracle database to Amazon Aurora PostgreSQL. The database has several applications that write to the same tables. The applications need to be migrated one by one with a month in between each migration. Management has expressed concerns that the database has a high number of reads and writes. The data must be kept in sync across both databases throughout the migration.
What should a solutions architect recommend?
- Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a change data capture (CDC) replication task and a table mapping to select all tables.
- Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
- Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a memory optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
- Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a compute optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select the largest tables.
- Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
migrating Oracle to PostgreSQL = Database Migration Service = 1,2
“(AWS DMS) to replicate the data first in a bulk load” = 2
https://aws.amazon.com/blogs/database/migrating-an-application-from-an-on-premises-oracle-database-to-amazon-rds-for-postgresql/
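The "table mapping to select all tables" part of the answer corresponds to a DMS selection rule with `%` wildcards. A sketch of that JSON shape:

```python
# Sketch: a DMS table-mapping document that selects every table in every
# schema, as used by a full load plus CDC replication task. The "%"
# wildcards are DMS selection-rule syntax.
def select_all_tables() -> dict:
    return {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "select-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }

mapping = select_all_tables()
```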
A company recently migrated a message processing system to AWS. The system receives messages into an ActiveMQ queue running on an Amazon EC2 instance. Messages are processed by a consumer application running on Amazon EC2. The consumer application processes the messages and writes results to a MySQL database running on Amazon EC2. The company wants this application to be highly available with low operational complexity.
Which architecture offers the HIGHEST availability?
- Add a second ActiveMQ server to another Availability Zone. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
- Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
- Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.
- Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.
- Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.
highly available with low operational complexity = Multi-AZ = 3,4
HIGHEST availability = ASG = 4 wins
A company is planning on deploying a newly built application on AWS in a default VPC. The application will consist of a web layer and database layer. The web server was created in public subnets, and the MySQL database was created in private subnets. All subnets are created with the default network ACL settings, and the default security group in the VPC will be replaced with new custom security groups.
The following are the key requirements:
– The web servers must be accessible only to users on an SSL connection.
– The database should be accessible to the web layer, which is created in a public subnet only.
– All traffic to and from the IP range 182.20.0.0/16 subnet should be blocked.
Which combination of steps meets these requirements? (Select two.)
- Create a database server security group with inbound and outbound rules for MySQL port 3306 traffic to and from anywhere (0.0.0.0/0).
- Create a database server security group with an inbound rule for MySQL port 3306 and specify the source as a web server security group.
- Create a web server security group with an inbound allow rule for HTTPS port 443 traffic from anywhere (0.0.0.0/0) and an inbound deny rule for IP range 182.20.0.0/16.
- Create a web server security group with an inbound rule for HTTPS port 443 traffic from anywhere (0.0.0.0/0). Create network ACL inbound and outbound deny rules for IP range 182.20.0.0/16.
- Create a web server security group with inbound and outbound rules for HTTPS port 443 traffic to and from anywhere (0.0.0.0/0). Create a network ACL inbound deny rule for IP range 182.20.0.0/16.
- Create a database server security group with an inbound rule for MySQL port 3306 and specify the source as a web server security group.
- Create a web server security group with an inbound rule for HTTPS port 443 traffic from anywhere (0.0.0.0/0). Create network ACL inbound and outbound deny rules for IP range 182.20.0.0/16.
web servers + SSL = 443 = 3,4,5
database should be accessible = MySQL port 3306 = 1,2
to and from anywhere (0.0.0.0/0) = not least privilege = not 1 = 2 wins
All traffic to and from the IP range 182.20.0.0/16 subnet should be blocked = security groups cannot deny traffic, so a higher-level control is needed = network ACL = 4,5
port 443 traffic to and from anywhere (0.0.0.0/0) = 5 is wrong; security groups are stateful, so only the inbound rule is needed = 4 wins
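The two winning answers map to two different API call shapes. A sketch of the parameter dicts (resource IDs and the rule number are placeholders; in boto3 these would go to `authorize_security_group_ingress` and `create_network_acl_entry`):

```python
# Sketch: DB security group ingress sourced from the web tier's security
# group (not a CIDR), plus a NACL deny entry for the blocked range.
def sg_ingress_from_sg(db_sg: str, web_sg: str) -> dict:
    return {
        "GroupId": db_sg,
        "IpPermissions": [{
            "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
            # Least privilege: only the web server security group may connect.
            "UserIdGroupPairs": [{"GroupId": web_sg}],
        }],
    }

def nacl_deny(nacl_id: str, cidr: str, egress: bool) -> dict:
    # Security groups have no deny rules, so the block lives in the NACL;
    # one entry per direction (egress=False for inbound, True for outbound).
    return {
        "NetworkAclId": nacl_id, "RuleNumber": 50, "Protocol": "-1",
        "RuleAction": "deny", "Egress": egress, "CidrBlock": cidr,
    }

ingress = sg_ingress_from_sg("sg-db", "sg-web")
deny_in = nacl_deny("acl-123", "182.20.0.0/16", egress=False)
deny_out = nacl_deny("acl-123", "182.20.0.0/16", egress=True)
```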
A company has an on-premises application that collects data and stores it to an on-premises NFS server. The company recently set up a 10 Gbps AWS Direct Connect connection. The company is running out of storage capacity on premises. The company needs to migrate the application data from on premises to the AWS Cloud while maintaining low-latency access to the data from the on-premises application.
What should a solutions architect do to meet these requirements?
- Deploy AWS Storage Gateway for the application data, and use the file gateway to store the data in Amazon S3. Connect the on-premises application servers to the file gateway using NFS.
- Attach an Amazon Elastic File System (Amazon EFS) file system to the NFS server, and copy the application data to the EFS file system. Then connect the on-premises application to Amazon EFS.
- Configure AWS Storage Gateway as a volume gateway. Make the application data available to the on-premises application from the NFS server and with Amazon Elastic Block Store (Amazon EBS) snapshots.
- Create an AWS DataSync agent with the NFS server as the source location and an Amazon Elastic File System (Amazon EFS) file system as the destination for application data transfer. Connect the on-premises application to the EFS file system.
- Deploy AWS Storage Gateway for the application data, and use the file gateway to store the data in Amazon S3. Connect the on-premises application servers to the file gateway using NFS.
on-premises = gateway needed = 1,3
low-latency access + NFS = file gateway
When do I use AWS DataSync and when do I use AWS Storage Gateway?
Use AWS DataSync to migrate existing data to Amazon S3, and subsequently use the File Gateway configuration of AWS Storage Gateway to retain access to the migrated data and for ongoing updates from your on-premises file-based applications.
A solutions architect needs to design a network that will allow multiple Amazon EC2 instances to access a common data source used for mission-critical data that can be accessed by all the EC2 instances simultaneously. The solution must be highly scalable, easy to implement and support the NFS protocol.
Which solution meets these requirements?
- Create an Amazon Elastic File System (Amazon EFS) file system. Configure a mount target in each Availability Zone. Attach each instance to the appropriate mount target.
- Create an additional EC2 instance and configure it as a file server. Create a security group that allows communication between the instances and apply that to the additional instance.
- Create an Amazon S3 bucket with the appropriate permissions. Create a role in AWS IAM that grants the correct permissions to the S3 bucket. Attach the role to the EC2 instances that need access to the data.
- Create an Amazon Elastic Block Store (Amazon EBS) volume with the appropriate permissions. Create a role in AWS IAM that grants the correct permissions to the EBS volume. Attach the role to the EC2 instances that need access to the data.
- Create an Amazon Elastic File System (Amazon EFS) file system. Configure a mount target in each Availability Zone. Attach each instance to the appropriate mount target.
common data source = concurrent = EFS
NFS = EFS
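Each AZ gets a mount target, and every instance mounts the same file system over NFSv4.1 via its DNS name. A sketch of how that name and mount command are assembled (file system ID, region, and mount point are placeholders):

```python
# Sketch: building the EFS DNS name and NFS mount command an instance
# would use to attach the shared file system.
def efs_dns_name(fs_id: str, region: str) -> str:
    return f"{fs_id}.efs.{region}.amazonaws.com"

def mount_command(fs_id: str, region: str, mount_point: str = "/mnt/efs") -> str:
    # EFS is mounted with the NFSv4.1 protocol; all instances in the VPC
    # can mount the same file system concurrently.
    return ("sudo mount -t nfs4 -o nfsvers=4.1 "
            f"{efs_dns_name(fs_id, region)}:/ {mount_point}")

cmd = mount_command("fs-0123abcd", "us-east-1")
print(cmd)
```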
A company hosts its application using Amazon Elastic Container Service (Amazon ECS) and wants to ensure high availability. The company wants to be able to deploy updates to its application even if nodes in one Availability Zone are not accessible.
The expected request volume for the application is 100 requests per second, and each container task is able to serve at least 60 requests per second. The company set up Amazon ECS with a rolling update deployment type with the minimum healthy percent parameter set to 50% and the maximum percent set to 100%.
Which configuration of tasks and Availability Zones meets these requirements?
- Deploy the application across two Availability Zones, with one task in each Availability Zone.
- Deploy the application across two Availability Zones, with two tasks in each Availability Zone.
- Deploy the application across three Availability Zones, with one task in each Availability Zone.
- Deploy the application across three Availability Zones, with two tasks in each Availability Zone.
100 requests per second ÷ 60 requests per second per task = at least 2 tasks must always be running = two tasks in each Availability Zone = 2,4
HA = Multi-AZ = three AZs needed to "deploy updates to its application even if nodes in one Availability Zone are not accessible" = 4 wins
minimum healthy percent parameter set to 50%
The 50% minimum healthy limit is critical here. It means a rolling update may stop up to half of the desired tasks, so the deployment needs enough total tasks that losing one AZ mid-update still leaves at least 2 tasks serving traffic.
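The capacity check above can be worked through numerically (a simplified model: worst case is the smaller of "tasks left after an AZ failure" and "minimum healthy tasks during a deployment"):

```python
# Worked capacity check for the answer options. Each task serves at least
# 60 req/s; the app needs 100 req/s even during a rolling update with one
# AZ unreachable.
import math

REQ_PER_SEC = 100
TASK_CAPACITY = 60  # req/s each task can serve
MIN_TASKS = math.ceil(REQ_PER_SEC / TASK_CAPACITY)  # minimum running tasks

def worst_case_capacity(azs: int, tasks_per_az: int, min_healthy_pct: int) -> int:
    total = azs * tasks_per_az
    after_az_loss = total - tasks_per_az                 # one whole AZ down
    during_deploy = math.floor(total * min_healthy_pct / 100)
    return min(after_az_loss, during_deploy) * TASK_CAPACITY

# Option 4: 3 AZs x 2 tasks, 50% minimum healthy
print(worst_case_capacity(3, 2, 50))
# Option 3: 3 AZs x 1 task falls short during a deployment
print(worst_case_capacity(3, 1, 50))
```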
A solutions architect wants all new users to have specific complexity requirements and mandatory rotation periods for IAM user passwords. What should the solutions architect do to accomplish this?
- Set an overall password policy for the entire AWS account
- Set a password policy for each IAM user in the AWS account.
- Use third-party vendor software to set password requirements.
- Attach an Amazon CloudWatch rule to the Create_newuser event to set the password with the appropriate requirements.
- Set an overall password policy for the entire AWS account
for each IAM user = per-user settings are typically the wrong answer; the IAM password policy (complexity and rotation) is set once for the whole account
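The account-wide policy is a single API call. A sketch of the parameters that boto3's `iam.update_account_password_policy` accepts (the specific values are illustrative):

```python
# Sketch: account-level IAM password policy parameters covering both the
# complexity requirements and the mandatory rotation period.
def account_password_policy() -> dict:
    return {
        "MinimumPasswordLength": 14,
        "RequireSymbols": True,
        "RequireNumbers": True,
        "RequireUppercaseCharacters": True,
        "RequireLowercaseCharacters": True,
        "MaxPasswordAge": 90,          # mandatory rotation period, in days
        "PasswordReusePrevention": 5,  # block reuse of recent passwords
    }

policy = account_password_policy()
```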
A company wants to improve the availability and performance of its hybrid application. The application consists of a stateful TCP-based workload hosted on Amazon EC2 instances in different AWS Regions and a stateless UDP-based workload hosted on premises.
Which combination of actions should a solutions architect take to improve availability and performance? (Choose two.)
- Create an accelerator using AWS Global Accelerator. Add the load balancers as endpoints.
- Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the load balancers.
- Configure two Application Load Balancers in each Region. The first will route to the EC2 endpoints and the second will route to the on-premises endpoints.
- Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure a Network Load Balancer in each Region that routes to the on-premises endpoints.
- Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure an Application Load Balancer in each Region that routes to the on-premises endpoints.
- Create an accelerator using AWS Global Accelerator. Add the load balancers as endpoints.
- Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure a Network Load Balancer in each Region that routes to the on-premises endpoints.
TCP = layer 4 = NLB
different AWS Regions = Global Accelerator
ALB = layer 7 = 5 wrong
A solutions architect is designing the architecture of a new application being deployed to the AWS Cloud. The application will run on Amazon EC2 On-Demand Instances and will automatically scale across multiple Availability Zones. The EC2 instances will scale up and down frequently throughout the day. An Application Load Balancer (ALB) will handle the load distribution. The architecture needs to support distributed session data management. The company is willing to make changes to code if needed.
What should the solutions architect do to ensure that the architecture supports distributed session data management?
- Use Amazon ElastiCache to manage and store session data.
- Use session affinity (sticky sessions) of the ALB to manage session data.
- Use Session Manager from AWS Systems Manager to manage the session.
- Use the GetSessionToken API operation in AWS Security Token Service (AWS STS) to manage the session.
- Use Amazon ElastiCache to manage and store session data.
distributed session data management = not sticky = not 2
Session Manager is to manage EC2 instances and other devices, servers, and VMs you operate = 3 wrong
distributed session data management = good use case for ElastiCache
STS is to request temporary credentials for IAM users = 4 wrong
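Why ElastiCache fits: with a shared session store, any instance behind the ALB can serve any request, so instances can scale in and out freely. A pure-Python model of that pattern (TTL expiry mimics a Redis EXPIRE; in the real architecture the store is ElastiCache, not an in-process dict):

```python
# Pure-Python model of a distributed session store keyed by session ID,
# shared by every instance behind the load balancer.
class SessionStore:
    def __init__(self, ttl_seconds: int):
        self.ttl = ttl_seconds
        self._data = {}  # session_id -> (payload, expires_at)

    def put(self, session_id: str, payload: dict, now: float) -> None:
        self._data[session_id] = (payload, now + self.ttl)

    def get(self, session_id: str, now: float):
        entry = self._data.get(session_id)
        if entry is None or now >= entry[1]:
            return None  # missing or expired, like a Redis TTL eviction
        return entry[0]

store = SessionStore(ttl_seconds=1800)
store.put("abc", {"user": "dana"}, now=0.0)
```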