Q151-200 Flashcards
A solutions architect is designing a hybrid application using the AWS cloud. The network between the on-premises data center and AWS will use an AWS Direct Connect (DX) connection. The application connectivity between AWS and the on-premises data center must be highly resilient. Which DX configuration should be implemented to meet these requirements?
A. Configure a DX connection with a VPN on top of it.
B. Configure DX connections at multiple DX locations.
C. Configure a DX connection using the most reliable DX partner.
D. Configure multiple virtual interfaces on top of a DX connection.
Option A (Configure a DX connection with a VPN on top of it): A VPN over the DX connection adds encryption, but all traffic still depends on a single DX connection and location, so it does not provide redundancy or geographic diversity.
Option C (Configure a DX connection using the most reliable DX partner): This option focuses on selecting a reliable DX partner but does not inherently provide redundancy or geographic diversity.
Option D (Configure multiple virtual interfaces on top of a DX connection): This option offers redundancy within a single DX connection but does not address geographic diversity across multiple DX locations.
A company runs an application on Amazon EC2 instances. The application is deployed in private subnets in three Availability Zones of the us-east-1 Region. The instances must be able to connect to the internet to download files. The company wants a design that is highly available across the Region. Which solution should be implemented to ensure that there are no disruptions to internet connectivity?
A. Deploy a NAT instance in a private subnet of each Availability Zone.
B. Deploy a NAT gateway in a public subnet of each Availability Zone.
C. Deploy a transit gateway in a private subnet of each Availability Zone.
D. Deploy an internet gateway in a public subnet of each Availability Zone.
Option A, deploying a NAT instance in each private subnet, is not recommended because it requires manual management, is less scalable, and may not provide the same level of high availability as NAT gateways.
Option C, deploying a transit gateway in private subnets, is used for connecting multiple VPCs or on-premises networks, and it’s not the appropriate solution for providing internet connectivity to EC2 instances.
Option D, deploying an internet gateway in public subnets, is not valid: an internet gateway is attached to the VPC as a whole rather than deployed per subnet, and it does not perform the network address translation that instances in private subnets need for outbound-only internet access.
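As an illustrative sketch (not part of the original question), the per-AZ NAT gateway layout in option B could be provisioned with boto3 roughly as follows. The subnet and route table IDs are placeholders; the key point is that each AZ's private route table points at a NAT gateway in that same AZ, so an outage in one AZ does not break outbound connectivity in the others.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs: one public subnet and one private route table per AZ.
az_layout = [
    {"public_subnet": "subnet-pub-1a", "private_route_table": "rtb-priv-1a"},
    {"public_subnet": "subnet-pub-1b", "private_route_table": "rtb-priv-1b"},
    {"public_subnet": "subnet-pub-1c", "private_route_table": "rtb-priv-1c"},
]

for az in az_layout:
    # Each NAT gateway needs its own Elastic IP.
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(
        SubnetId=az["public_subnet"],
        AllocationId=eip["AllocationId"],
    )
    nat_id = nat["NatGateway"]["NatGatewayId"]

    # Wait until the NAT gateway is available before routing through it.
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # Default route for the private subnet in the same AZ.
    ec2.create_route(
        RouteTableId=az["private_route_table"],
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )
```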
Application developers have noticed that a production application is very slow when business reporting users run large production reports against the Amazon RDS instance backing the application. The CPU and memory utilization metrics for the RDS instance do not exceed 60% while the reporting queries are running. The business reporting users must be able to generate reports without affecting the application’s performance. Which action will accomplish this?
A. Increase the size of the RDS instance.
B. Create a read replica and connect the application to it.
C. Enable multiple Availability Zones on the RDS instance.
D. Create a read replica and connect the business reports to it.
Option A, increasing the size of the RDS instance, may provide some improvement but doesn’t address the isolation of reporting queries from the application, and it can be costly.
Option B, creating a read replica and connecting the application to it, gets the isolation backwards: a read replica is read-only, so the application's writes would fail, while the resource-intensive reporting queries would continue to run against the primary instance.
Option C, enabling multiple Availability Zones on the RDS instance, enhances availability and fault tolerance but doesn’t directly address the performance issue related to reporting queries.
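A minimal boto3 sketch of option D, assuming placeholder instance identifiers: the replica is created from the primary, and the reporting tool is then pointed at the replica's endpoint while the application keeps using the primary.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Placeholder identifiers; substitute the real source instance name.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-reporting-replica",
    SourceDBInstanceIdentifier="app-db-primary",
    DBInstanceClass="db.r5.large",  # may differ from the primary
)

# Wait until the replica is available, then hand its endpoint to the
# business reporting users; the application keeps the primary endpoint.
rds.get_waiter("db_instance_available").wait(
    DBInstanceIdentifier="app-db-reporting-replica"
)
replica = rds.describe_db_instances(
    DBInstanceIdentifier="app-db-reporting-replica"
)["DBInstances"][0]
print(replica["Endpoint"]["Address"])
```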
A company is running a two-tier ecommerce website using AWS services. The current architecture uses a public-facing Elastic Load Balancer that sends traffic to Amazon EC2 instances in a private subnet. The static content is hosted on EC2 instances, and the dynamic content is retrieved from a MySQL database. The application is running in the United States. The company recently started selling to users in Europe and Australia. A solutions architect needs to design a solution so their international users have an improved browsing experience. Which solution is MOST cost-effective?
A. Host the entire website on Amazon S3.
B. Use Amazon CloudFront and Amazon S3 to host static images.
C. Increase the number of public load balancers and EC2 instances.
D. Deploy the two-tier website in AWS Regions in Europe and Australia.
Option A, hosting the entire website on Amazon S3, may not be suitable if the website relies on dynamic content served from EC2 instances and a MySQL database.
Option C, increasing the number of public load balancers and EC2 instances, could be costly and complex, and it may not provide the same level of latency reduction as a CDN like CloudFront.
Option D, deploying the two-tier website in AWS Regions in Europe and Australia, would increase operational complexity and cost significantly, as it involves maintaining separate infrastructure in multiple regions.
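To illustrate option B, here is a hedged boto3 sketch of creating a CloudFront distribution with an S3 origin for the static images. The bucket name is a placeholder, and the cache policy ID shown is the AWS managed "CachingOptimized" policy.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")  # CloudFront is a global service

# Placeholder bucket: static images live in S3 and CloudFront caches
# them at edge locations near users in Europe and Australia.
bucket_origin = "example-static-assets.s3.amazonaws.com"

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # must be unique per request
        "Comment": "Static image cache for international users",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "s3-static-assets",
                    "DomainName": bucket_origin,
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-static-assets",
            "ViewerProtocolPolicy": "redirect-to-https",
            # AWS managed "CachingOptimized" cache policy.
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
```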
A company’s website provides users with downloadable historical performance reports. The website needs a solution that will scale to meet the company’s website demands globally. The solution should be cost-effective, limit the provisioning of infrastructure resources, and provide the fastest possible response time. Which combination should a solutions architect recommend to meet these requirements?
A. Amazon CloudFront and Amazon S3
B. AWS Lambda and Amazon DynamoDB
C. Application Load Balancer with Amazon EC2 Auto Scaling
D. Amazon Route 53 with internal Application Load Balancers
Option B (AWS Lambda and Amazon DynamoDB) is not the best fit for serving static downloadable reports, as it’s typically used for processing and serverless compute tasks, not for serving files.
Option C (Application Load Balancer with Amazon EC2 Auto Scaling) and Option D (Amazon Route 53 with internal Application Load Balancers) are more appropriate for dynamic web applications that require server-side processing. They may not be as cost-effective for serving static files and may not provide the same level of global distribution and low-latency response as Amazon CloudFront and S3.
A company wants to deploy a shared file system for its .NET application servers and Microsoft SQL Server databases running on Amazon EC2 instances with Windows Server 2016. The solution must be able to be integrated into the corporate Active Directory domain, be highly durable, be managed by AWS, and provide high levels of throughput and IOPS. Which solution meets these requirements?
A. Use Amazon FSx for Windows File Server.
B. Use Amazon Elastic File System (Amazon EFS).
C. Use AWS Storage Gateway in file gateway mode.
D. Deploy a Windows file server on two On-Demand Instances across two Availability Zones.
Option B (Amazon Elastic File System, or Amazon EFS) provides shared file storage over NFS, which is not supported on Windows instances, and it does not integrate with Active Directory, so it is not suited to this Windows and SQL Server environment.
Option C (AWS Storage Gateway in file gateway mode) is primarily used for hybrid cloud storage scenarios and may not provide the same level of performance as Amazon FSx for Windows File Server.
Option D (Deploying a Windows file server on EC2 instances) would require manual management and may not provide the same level of durability and ease of integration with Active Directory as Amazon FSx.
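A hedged boto3 sketch of option A, with placeholder subnet and directory IDs. The file system joins the corporate domain through an AWS Managed Microsoft AD directory here; a self-managed AD configuration could be supplied instead.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,  # GiB
    StorageType="SSD",
    SubnetIds=["subnet-priv-1a", "subnet-priv-1b"],  # placeholders
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",   # placeholder directory ID
        "DeploymentType": "MULTI_AZ_1",        # standby file server in a second AZ
        "PreferredSubnetId": "subnet-priv-1a",
        "ThroughputCapacity": 512,             # MB/s, sized for high throughput/IOPS
    },
)
```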
A company that develops web applications has launched hundreds of Application Load Balancers (ALBs) in multiple Regions. The company wants to create an allow list for the IPs of all the load balancers on its firewall device. A solutions architect is looking for a one-time, highly available solution to address this request, which will also help reduce the number of IPs that need to be allowed by the firewall. What should the solutions architect recommend to meet these requirements?
A. Create an AWS Lambda function to keep track of the IPs for all the ALBs in different Regions. Keep refreshing this list.
B. Set up a Network Load Balancer (NLB) with Elastic IPs. Register the private IPs of all the ALBs as targets to this NLB.
C. Launch AWS Global Accelerator and create endpoints for all the Regions. Register all the ALBs in different Regions to the corresponding endpoints.
D. Set up an Amazon EC2 instance, assign an Elastic IP to this EC2 instance, and configure the instance as a proxy to forward traffic to all the ALBs.
A. Creating a Lambda function to keep track of ALB IPs would require ongoing maintenance and might not provide the necessary global reach and performance.
B. Using an NLB with Elastic IPs to register ALB IPs as targets would also require manual management and might not provide the global reach needed.
D. Setting up an EC2 instance as a proxy adds unnecessary complexity and potential points of failure to the architecture.
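A minimal boto3 sketch of option C, assuming a placeholder ALB ARN. Global Accelerator assigns two static anycast IP addresses at creation, and those two IPs are the only addresses the firewall needs to allow, regardless of how many ALBs sit behind them.

```python
import boto3

# The Global Accelerator control-plane API is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="alb-allow-list-entry-point",
    IpAddressType="IPV4",
    Enabled=True,
)
arn = accelerator["Accelerator"]["AcceleratorArn"]

# The two static anycast IPs for the firewall allow list.
print(accelerator["Accelerator"]["IpSets"][0]["IpAddresses"])

listener = ga.create_listener(
    AcceleratorArn=arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region, each registering that Region's ALBs
# (the ALB ARN below is a placeholder).
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[
        {"EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:"
                       "loadbalancer/app/example-alb/abc123"}
    ],
)
```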
A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API calls to store the resized images in Amazon S3. How can a solutions architect ensure that the application has permission to access Amazon S3?
A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the container.
B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.
C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the launch configuration used by the ECS cluster.
D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS cluster while logged in as this account.
The other options mentioned (A, C, and D) involve less secure or less efficient methods of granting permissions and are not the recommended approach for managing permissions in an ECS environment.
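A hedged boto3 sketch of option B, with placeholder role, bucket, and image names. The trust policy lets ECS tasks assume the role, the inline policy scopes S3 access to the target bucket, and the task definition references the role so containers receive temporary credentials automatically.

```python
import json
import boto3

iam = boto3.client("iam")
ecs = boto3.client("ecs")

# Trust policy that lets ECS tasks assume the role.
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ecs-tasks.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="image-resizer-task-role",
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
)

# Scope the S3 permissions to the destination bucket (placeholder name).
iam.put_role_policy(
    RoleName="image-resizer-task-role",
    PolicyName="s3-resized-images",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": "arn:aws:s3:::example-resized-images/*",
        }],
    }),
)

# taskRoleArn in the task definition grants the containers the role.
ecs.register_task_definition(
    family="image-resizer",
    taskRoleArn=role["Role"]["Arn"],
    containerDefinitions=[{
        "name": "resizer",
        "image": "example/image-resizer:latest",  # placeholder image
        "memory": 512,
    }],
)
```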
A company is planning to migrate its virtual server-based workloads to AWS. The company has internet-facing load balancers backed by application servers. The application servers rely on patches from an internet-hosted repository. Which services should a solutions architect recommend be hosted on the public subnet? (Choose two.)
A. NAT gateway
B. Amazon RDS DB instances
C. Application Load Balancers
D. Amazon EC2 application servers
E. Amazon Elastic File System (Amazon EFS) volumes
B. Amazon RDS DB instances: Database instances are typically placed in private subnets to enhance security. Access to the databases can be controlled through security groups and network ACLs, and they do not need direct internet access. The application servers in the private subnet can communicate with the RDS instances.
D. Amazon EC2 application servers: Application servers should be placed in private subnets to minimize their exposure to the internet. They can securely access external repositories for patches via a NAT gateway or other controlled outbound access.
E. Amazon Elastic File System (Amazon EFS) volumes: Like application servers, Amazon EFS volumes are usually placed in private subnets to maintain security and control access. They can be accessed by the application servers in the private subnet, and outbound access to the internet for patching or updates can be controlled via a NAT gateway.
A company has established a new AWS account. The account is newly provisioned, and no changes have been made to the default settings. The company is concerned about the security of the AWS account root user. What should be done to secure the root user?
A. Create IAM users for daily administrative tasks. Disable the root user.
B. Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.
C. Generate an access key for the root user. Use the access key for daily administration tasks instead of the AWS Management Console.
D. Provide the root user credentials to the most senior solutions architect. Have the solutions architect use the root user for daily administration tasks.
A. Create IAM users for daily administrative tasks. Disable the root user: Disabling the root user entirely can be risky because it might be needed in emergency situations. It’s generally better to secure it with MFA rather than disabling it.
C. Generate an access key for the root user. Use the access key for daily administration tasks instead of the AWS Management Console: This approach is not recommended because using access keys for the root user in daily tasks can be less secure than using IAM users with controlled permissions.
D. Provide the root user credentials to the most senior solutions architect. Have the solutions architect use the root user for daily administration tasks: Sharing root user credentials and using them for daily tasks is not a best practice and poses a security risk. It’s important to avoid sharing root credentials and instead use IAM users with fine-grained permissions.
A company is using a tape backup solution to store its key application data offsite. The daily data volume is around 50 TB. The company needs to retain the backups for 7 years for regulatory purposes. The backups are rarely accessed, and a week’s notice is typically given if a backup needs to be restored. The company is now considering a cloud-based option to reduce the storage costs and operational burden of managing tapes. The company also wants to make sure that the transition from tape backups to the cloud minimizes disruptions. Which storage solution is MOST cost-effective?
A. Use AWS Storage Gateway to back up to Amazon S3 Glacier Deep Archive.
B. Use AWS Snowball Edge to directly integrate the backups with Amazon S3 Glacier.
C. Copy the backup data to Amazon S3 and create a lifecycle policy to move the data to Amazon S3 Glacier.
D. Use AWS Storage Gateway to back up to Amazon S3 and create a lifecycle policy to move the backup to Amazon S3 Glacier.
B. Use AWS Snowball Edge to directly integrate the backups with Amazon S3 Glacier: Less suitable due to complexity, logistical challenges, and operational overhead for daily backup operations.
C. Copy the backup data to Amazon S3 and create a lifecycle policy to move the data to Amazon S3 Glacier: Less suitable due to potentially higher initial storage costs and added complexity in managing data movement.
D. Use Amazon Storage Gateway to back up to Amazon S3 and create a lifecycle policy to move the backup to Amazon S3 Glacier: Less suitable due to similar reasons as Option C, including higher initial storage costs and the need for ongoing policy management.
A company requires a durable backup storage solution for its on-premises database servers while ensuring on-premises applications maintain access to these backups for quick recovery. The company will use AWS storage services as the destination for these backups. A solutions architect is designing a solution with minimal operational overhead. Which solution should the solutions architect implement?
A. Deploy an AWS Storage Gateway file gateway on-premises and associate it with an Amazon S3 bucket.
B. Back up the databases to an AWS Storage Gateway volume gateway and access it using the Amazon S3 API.
C. Transfer the database backup files to an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance.
D. Back up the database directly to an AWS Snowball device and use lifecycle rules to move the data to Amazon S3 Glacier Deep Archive.
Option B (backing up to an AWS Storage Gateway volume gateway) does not fit the stated access pattern: volume gateway data is stored as Amazon EBS snapshots rather than objects, so it cannot be accessed through the Amazon S3 API, and it adds volume-management complexity compared with a file gateway.
Option C (transferring backup files to an Amazon EBS volume) is not as scalable for long-term backups, and it may not provide the same level of durability as Amazon S3.
Option D (using AWS Snowball) is more suitable for offline data transfer and initial large-scale data migration, but it may not be as seamless for ongoing backup and recovery operations as the file gateway solution.
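As a sketch of option A, assuming the file gateway appliance has already been activated on-premises, an SMB file share backed by the backup bucket could be created with boto3 as follows. The gateway ARN, IAM role, and bucket name are placeholders.

```python
import uuid
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")

# Placeholder ARNs: the gateway is already activated on-premises, and
# the role grants the gateway access to the target bucket. On-premises
# servers write backups to this SMB share; the files land in S3, and
# recently used data stays in the local cache for quick recovery.
sgw.create_smb_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678",
    Role="arn:aws:iam::111122223333:role/StorageGatewayS3Access",
    LocationARN="arn:aws:s3:::example-db-backups",
)
```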
A company decides to migrate its three-tier web application from on-premises to the AWS Cloud. The new database must be capable of dynamically scaling storage capacity and performing table joins.
Which AWS service meets these requirements?
A. Amazon Aurora
B. Amazon RDS for SQL Server
C. Amazon DynamoDB Streams
D. Amazon DynamoDB on-demand
Amazon RDS for SQL Server (option B) is also a managed relational database service, but it may not provide the same level of performance and scalability as Amazon Aurora. Additionally, the question does not specify the need for SQL Server compatibility.
Amazon DynamoDB Streams (option C) and Amazon DynamoDB on-demand (option D) are NoSQL database services and do not support table joins in the same way as a relational database system. They are not the best fit for this specific requirement.
A company mandates that an Amazon S3 gateway endpoint must allow traffic to trusted buckets only. Which method should a solutions architect implement to meet this requirement?
A. Create a bucket policy for each of the company’s trusted S3 buckets that allows traffic only from the company’s trusted VPCs.
B. Create a bucket policy for each of the company’s trusted S3 buckets that allows traffic only from the company’s S3 gateway endpoint IDs.
C. Create an S3 endpoint policy for each of the company’s S3 gateway endpoints that blocks access from any VPC other than the company’s trusted VPCs.
D. Create an S3 endpoint policy for each of the company’s S3 gateway endpoints that provides access to the Amazon Resource Name (ARN) of the trusted S3 buckets.
Options A and B place controls on the buckets rather than on the endpoint itself, so they do not satisfy the requirement that the endpoint allow traffic to trusted buckets only. Option C is not workable because a gateway endpoint belongs to a single VPC, and its policy controls which resources can be reached through it, not which VPCs can use it. Only option D scopes the endpoint policy to the ARNs of the trusted buckets.
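A hedged boto3 sketch of option D, with placeholder bucket names and endpoint ID: the endpoint policy allows S3 actions only against the trusted bucket ARNs, so any request through the endpoint to an untrusted bucket is denied.

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Endpoint policy permitting requests to the trusted buckets only.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::trusted-bucket-1",
            "arn:aws:s3:::trusted-bucket-1/*",
            "arn:aws:s3:::trusted-bucket-2",
            "arn:aws:s3:::trusted-bucket-2/*",
        ],
    }],
}

ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",  # placeholder endpoint ID
    PolicyDocument=json.dumps(endpoint_policy),
)
```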
A company is using a VPC peering strategy to connect its VPCs in a single Region to allow for cross-communication. A recent increase in account creations and VPCs has made it difficult to maintain the VPC peering strategy, and the company expects to grow to hundreds of VPCs. There are also new requests to create site-to-site VPNs with some of the VPCs. A solutions architect has been tasked with creating a centrally managed networking setup for multiple accounts, VPCs, and VPNs. Which networking solution meets these requirements?
A. Configure shared VPCs and VPNs and share to each other.
B. Configure a hub-and-spoke VPC and route all traffic through VPC peering.
C. Configure an AWS Direct Connect connection between all VPCs and VPNs.
D. Configure a transit gateway with AWS Transit Gateway and connect all VPCs and VPNs.
Option A (shared VPCs and VPNs): While shared VPCs can simplify some aspects of network management, sharing becomes complex across many AWS accounts, and this option does not address the requirement for site-to-site VPNs to on-premises networks.
Option B (hub-and-spoke VPC with VPC peering): VPC peering is not transitive, so a true hub-and-spoke design cannot be built on peering alone; managing peering connections becomes unmanageable at hundreds of VPCs, and this approach does not address the site-to-site VPN requirement.
Option C (AWS Direct Connect between all VPCs and VPNs): Direct Connect provides dedicated network connections from on-premises data centers to AWS; it does not provide VPC-to-VPC connectivity or centralized management of VPCs and VPNs.
A solutions architect is helping a developer design a new ecommerce shopping cart application using AWS services. The developer is unsure of the current database schema and expects to make changes as the ecommerce site grows. The solution needs to be highly resilient and capable of automatically scaling read and write capacity. Which database solution meets these requirements?
A. Amazon Aurora PostgreSQL
B. Amazon DynamoDB with on-demand enabled
C. Amazon DynamoDB with DynamoDB Streams enabled
D. Amazon SQS and Amazon Aurora PostgreSQL
Amazon Aurora PostgreSQL (option A) is a solid relational database option, but it may involve more complexity and manual management when it comes to scaling and schema changes compared to DynamoDB with on-demand capacity.
While DynamoDB Streams (option C) can be useful for real-time change tracking, it’s not directly related to the requirement for automatically scaling capacity.
Using Amazon SQS (option D) for a shopping cart application is primarily for message queuing and decoupling components, and it’s not a direct replacement for a database. It can complement a database solution but wouldn’t be the primary choice for storing shopping cart data.
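A short boto3 sketch of option B, using placeholder table and key names: with on-demand capacity (PAY_PER_REQUEST), reads and writes scale automatically, and because only the key attributes are declared up front, the rest of the item schema can evolve as the ecommerce site grows.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="ShoppingCart",  # placeholder name
    BillingMode="PAY_PER_REQUEST",  # on-demand read/write scaling
    AttributeDefinitions=[
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "ItemId", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "CustomerId", "KeyType": "HASH"},
        {"AttributeName": "ItemId", "KeyType": "RANGE"},
    ],
)
```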
A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file share hosted in the user's on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers to Amazon EC2 instances in multiple Availability Zones that are connected to the storage solution, and configuring an Elastic Load Balancer attached to the instances.
Which replacement to the on-premises file share is MOST resilient and durable?
A. Migrate the file share to Amazon RDS.
B. Migrate the file share to AWS Storage Gateway.
C. Migrate the file share to Amazon FSx for Windows File Server.
D. Migrate the file share to Amazon Elastic File System (Amazon EFS).
The other options fall short for a Windows file share: Amazon EFS (option D) is NFS-based and not supported on Windows instances, AWS Storage Gateway (option B) is designed for hybrid cloud storage rather than serving as a primary in-cloud file share, and Amazon RDS (option A) is a relational database service, not a file share. Amazon FSx for Windows File Server, deployed in a Multi-AZ configuration, provides the durable, resilient SMB share the IIS servers need.
A company needs to implement a relational database with a multi-Region disaster recovery Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of 1 minute. Which AWS solution can achieve this?
A. Amazon Aurora Global Database
B. Amazon DynamoDB global tables
C. Amazon RDS for MySQL with Multi-AZ enabled
D. Amazon RDS for MySQL with a cross-Region snapshot copy
Option B (Amazon DynamoDB global tables) replicates data across Regions with comparable RPO and RTO characteristics, but DynamoDB is a NoSQL service and does not meet the requirement for a relational database.
Option C (Amazon RDS for MySQL with Multi-AZ enabled) offers high availability within a single Region but does not provide multi-Region disaster recovery capabilities.
Option D (Amazon RDS for MySQL with a cross-Region snapshot copy) allows for cross-Region backups, but restoring from periodic snapshots cannot achieve an RPO of 1 second or an RTO of 1 minute. Amazon Aurora Global Database (option A) is the relational option designed for these targets, with typical cross-Region replication lag under a second and promotion of a secondary Region in under a minute.
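A hedged boto3 sketch of option A, with placeholder cluster identifiers and credentials: a global cluster wraps a primary Aurora cluster in one Region and a headless secondary cluster in another, and promoting the secondary during a Regional outage is the path to the roughly one-minute RTO. DB instances would still need to be added to each cluster separately.

```python
import boto3

rds_primary = boto3.client("rds", region_name="us-east-1")
rds_secondary = boto3.client("rds", region_name="us-west-2")

rds_primary.create_global_cluster(
    GlobalClusterIdentifier="orders-global",
    Engine="aurora-mysql",
)

# Primary cluster in the first Region (placeholder credentials).
rds_primary.create_db_cluster(
    DBClusterIdentifier="orders-primary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="orders-global",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
)

# Secondary cluster in a second Region; it inherits credentials and
# receives storage-level replication with typical sub-second lag.
rds_secondary.create_db_cluster(
    DBClusterIdentifier="orders-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="orders-global",
)
```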
A company runs a web service on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across two Availability Zones. The company needs a minimum of four instances at all times to meet the required service level agreement (SLA) while keeping costs low.
If an Availability Zone fails, how can the company remain compliant with the SLA?
A. Add a target tracking scaling policy with a short cooldown period.
B. Change the Auto Scaling group launch configuration to use a larger instance type.
C. Change the Auto Scaling group to use six servers across three Availability Zones.
D. Change the Auto Scaling group to use eight servers across two Availability Zones.
Option A (Add a target tracking scaling policy with a short cooldown period) may help with automatic scaling but does not address the requirement for maintaining a minimum of four instances during an Availability Zone failure.
Option B (Change the Auto Scaling group launch configuration to use a larger instance type) focuses on instance type rather than Availability Zone redundancy.
Option D (Change the Auto Scaling group to use eight servers across two Availability Zones) would also leave four instances running if one Availability Zone fails, but it requires eight instances during normal operation instead of six, so it does not keep costs as low as option C.
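A one-call boto3 sketch of option C, with placeholder group and subnet names. The arithmetic is the point: six instances over three AZs means losing any one AZ still leaves four instances (6 × 2/3 = 4), meeting the SLA with two fewer instances than the eight-across-two-AZs design.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Spread six instances over three AZs (placeholder subnet IDs); any
# single-AZ failure leaves four instances, satisfying the SLA.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-service-asg",
    MinSize=6,
    DesiredCapacity=6,
    VPCZoneIdentifier="subnet-1a,subnet-1b,subnet-1c",
)
```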
A company is reviewing its AWS Cloud deployment to ensure its data is not accessed by anyone without appropriate authorization. A solutions architect is tasked with identifying all open Amazon S3 buckets and recording any S3 bucket configuration changes.
What should the solutions architect do to accomplish this?
A. Enable AWS Config service with the appropriate rules
B. Enable AWS Trusted Advisor with the appropriate checks.
C. Write a script using an AWS SDK to generate a bucket report
D. Enable Amazon S3 server access logging and configure Amazon CloudWatch Events.
Option B (Enable AWS Trusted Advisor) focuses on cost and security best practices but does not provide detailed visibility into S3 bucket configurations.
Option C (Write a script using an AWS SDK) would require manual effort and ongoing maintenance to monitor changes and identify open S3 buckets.
Option D (Enable Amazon S3 server access logging and configure Amazon CloudWatch Events) is primarily focused on access logging and monitoring, but it doesn’t provide a comprehensive solution for identifying open buckets and tracking configuration changes. AWS Config is a better choice for this purpose.
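To illustrate option A, here is a hedged boto3 sketch that enables two AWS managed Config rules that flag publicly readable or writable buckets. It assumes the Config configuration recorder, which captures every S3 bucket configuration change for the audit trail, has already been enabled separately.

```python
import boto3

config = boto3.client("config", region_name="us-east-1")

# AWS managed rule identifiers for detecting open S3 buckets.
for rule in ("S3_BUCKET_PUBLIC_READ_PROHIBITED",
             "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"):
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": rule.lower().replace("_", "-"),
            "Source": {"Owner": "AWS", "SourceIdentifier": rule},
        }
    )
```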