sthithapragnakk -- SAA Exam Dumps Jan 24 COPY Flashcards
773 # A company wants to deploy its containerized application workloads in a VPC across three availability zones. The business needs a solution that is highly available across all availability zones. The solution should require minimal changes to the application. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS service auto-scaling to use target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with an availability zone attribute.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) self-managed nodes. Configure application auto-scaling to use target tracking scaling. Set the minimum capacity to 3.
C. Use Amazon EC2 Reserved Instances. Start three EC2 instances in a spread placement group. Configure an auto-scaling group to use target tracking scaling. Set the minimum capacity to 3.
D. Use an AWS Lambda function. Configure the Lambda function to connect to a VPC. Configure application auto-scaling to use Lambda as a scalable target. Set the minimum capacity to 3.
A. Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS service auto-scaling to use target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with an availability zone attribute.
This option involves using ECS for container orchestration. Amazon ECS Service Auto Scaling allows you to automatically adjust the number of tasks running on a service. Setting the task placement strategy to be “spread” with an Availability Zone attribute ensures that tasks are distributed equally across Availability Zones. This solution is designed for high availability with minimal application changes.
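A minimal boto3 sketch of that configuration; the cluster, service, and task definition names are placeholders and the spread strategy only applies to the EC2 launch type:

```python
import boto3

ecs = boto3.client("ecs")

# Service with at least 3 tasks, spread across Availability Zones so the
# loss of one AZ still leaves tasks running in the other two.
ecs.create_service(
    cluster="app-cluster",
    serviceName="app-service",
    taskDefinition="app-task",
    desiredCount=3,
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"}
    ],
)

# Target tracking scaling is configured through Application Auto Scaling;
# the minimum capacity of 3 keeps one task per AZ at all times.
aas = boto3.client("application-autoscaling")
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/app-cluster/app-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=3,
    MaxCapacity=12,
)
```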
774 # A media company stores movies on Amazon S3. Each movie is stored in a single video file ranging from 1 GB to 10 GB in size. The company must be able to provide streaming content for a movie within 5 minutes of a user purchasing it. There is a greater demand for films less than 20 years old than for films more than 20 years old. The company wants to minimize the costs of the hosting service based on demand. What solution will meet these requirements?
A. Store all media in Amazon S3. Use S3 lifecycle policies to move media data to the infrequent access tier when demand for a movie decreases.
B. Store newer movie video files in S3 Standard. Store older movie video files in S3 Standard-Infrequent Access (S3 Standard-IA). When a user requests an older movie, recover the video file using standard retrieval.
C. Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier Flexible Retrieval. When a user requests an older movie, recover the video file using expedited retrieval.
D. Store newer movie video files in S3 Standard. Store older movie video files in S3 Glacier Flexible Retrieval. When a user requests an older movie, recover the video file using bulk retrieval.
C. Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier Flexible Retrieval. When a user requests an older movie, recover the video file using expedited retrieval.
This option uses S3 Intelligent-Tiering for newer movies, automatically optimizing costs based on access patterns. Older movies are stored in S3 Glacier Flexible Retrieval, and expedited retrieval is used when a user requests an older movie. Expedited retrievals from S3 Glacier Flexible Retrieval typically complete in 1-5 minutes, which meets the 5-minute availability requirement.
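A minimal sketch of triggering an expedited restore when a purchase comes in; the bucket and key are placeholder names:

```python
import boto3

s3 = boto3.client("s3")

# Expedited restores from S3 Glacier Flexible Retrieval typically complete
# in 1-5 minutes, which fits the 5-minute streaming requirement.
s3.restore_object(
    Bucket="movie-archive",
    Key="older/movie-1998.mp4",
    RestoreRequest={
        "Days": 1,  # how long the restored copy remains available
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```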
775 # A solutions architect needs to design the architecture of an application that a vendor provides as a Docker container image. The container needs 50 GB of available storage for temporary files. The infrastructure must be serverless. Which solution meets these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function that uses the Docker container image with a volume mounted on Amazon S3 that has more than 50 GB of space.
B. Create an AWS Lambda function that uses the Docker container image with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the Amazon EC2 launch type with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space. Create a task definition for the container image. Create a service with that task definition.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.
The key constraint is the 50 GB of temporary storage. Remember that Lambda container images are limited to 10 GB in size, and Lambda's ephemeral /tmp storage starts at 512 MB and can only be increased to 10 GB (https://blog.awsfundamentals.com/lambda-limitations), so neither Lambda option can provide 50 GB.
This option involves using Amazon ECS with Fargate, a serverless computing engine for containers. Using Amazon EFS enables persistent storage across multiple containers and instances. This approach meets the requirement of providing 50GB of storage and is serverless as it uses Fargate.
AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Per https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-task-storage.html, each Amazon ECS task hosted on AWS Fargate receives ephemeral storage (temporary file storage) for bind mounts, which can be mounted and shared among containers using the volumes, mountPoints, and volumesFrom parameters in the task definition.
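A minimal sketch of a Fargate task definition that mounts an EFS file system for the temporary files; the file system ID, image URI, and sizing values are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Fargate task definition that mounts an EFS file system so the vendor
# container has well over 50 GB of space for temporary files.
ecs.register_task_definition(
    family="vendor-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="4096",
    containerDefinitions=[
        {
            "name": "vendor",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/vendor-app:latest",
            "mountPoints": [
                {"sourceVolume": "scratch", "containerPath": "/scratch"}
            ],
        }
    ],
    volumes=[
        {
            "name": "scratch",
            "efsVolumeConfiguration": {"fileSystemId": "fs-0123456789abcdef0"},
        }
    ],
)
```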
776 # A company needs to use its on-premises LDAP directory service to authenticate its users to the AWS Management Console. The directory service does not support Security Assertion Markup Language (SAML). Which solution meets these requirements?
A. Enable AWS IAM Identity Center (AWS Single Sign-On) between AWS and the on-premises LDAP directory.
B. Create an IAM policy that uses AWS credentials and integrate the policy into LDAP.
C. Configure a process that rotates IAM credentials each time LDAP credentials are updated.
D. Develop an on-premises custom identity broker application or process that uses AWS Security Token Service (AWS STS) to obtain short-lived credentials.
D. Develop an on-premises custom identity broker application or process that uses AWS Security Token Service (AWS STS) to obtain short-lived credentials.
This option involves creating a custom on-premises identity broker application or process that communicates with AWS Security Token Service (STS) to obtain short-lived credentials. This custom solution acts as an intermediary between the on-premises LDAP directory and AWS. Provides a way to obtain temporary security credentials without requiring direct LDAP support. This is a common approach for scenarios where SAML is not an option.
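A minimal sketch of what the broker does after it has validated a user against LDAP; the role ARN and session name are placeholders:

```python
import boto3

# Runs on premises inside the custom identity broker, after the user has
# been authenticated against the LDAP directory.
sts = boto3.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ConsoleAccessForLdapUsers",
    RoleSessionName="ldap-user-jdoe",
    DurationSeconds=3600,  # short-lived credentials
)

creds = resp["Credentials"]
# creds["AccessKeyId"], creds["SecretAccessKey"], and creds["SessionToken"]
# are handed to the user, or exchanged for a console sign-in URL via the
# AWS federation endpoint.
```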
**Option A: Enable AWS IAM Identity Center (AWS Single Sign-On) between AWS and on-premises LDAP.**
- Explanation: AWS IAM Identity Center (the successor to AWS Single Sign-On) is designed to simplify AWS access management for enterprise users and administrators. It supports integration with on-premises directories, but it relies on SAML for federation. Since the LDAP directory does not support SAML, option A is not suitable for this scenario.
**Option B: Create an IAM policy that uses AWS credentials and integrate the policy into LDAP.**
- Explanation: This option suggests creating an IAM policy that uses AWS credentials and integrating it into LDAP. However, IAM policies are associated with AWS identities; they are not integrated directly into LDAP. This option does not align with common practices for federated authentication.
**Option C: Configure a process that rotates IAM credentials each time LDAP credentials are updated.**
- Explanation: Rotating IAM credentials every time LDAP credentials are updated introduces complexity and operational overhead. In addition, IAM credentials are long-lived, and this approach does not provide the single sign-on (SSO) experience that federated authentication solutions offer.
777 # A company stores multiple Amazon Machine Images (AMIs) in an AWS account to launch its Amazon EC2 instances. AMIs contain critical data and configurations that are necessary for business operations. The company wants to implement a solution that recovers accidentally deleted AMIs quickly and efficiently. Which solution will meet these requirements with the LEAST operational overhead?
A. Create Amazon Elastic Block Store (Amazon EBS) snapshots of the AMIs. Store snapshots in a separate AWS account.
B. Copy all AMIs to another AWS account periodically.
C. Create a Recycle Bin retention rule.
D. Upload the AMIs to an Amazon S3 bucket that has cross-region replication.
C. Create a Recycle Bin retention rule.
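Recycle Bin is the lowest-overhead option because deregistered AMIs stay in the same account and Region and can be restored in place, with no copies or extra accounts to maintain. A rough sketch using the boto3 Recycle Bin client; the retention period is an assumed example value:

```python
import boto3

rbin = boto3.client("rbin")

# Region-level Recycle Bin rule: deregistered AMIs are retained for 14 days
# and can be restored from the Recycle Bin during that window.
rbin.create_rule(
    ResourceType="EC2_IMAGE",
    RetentionPeriod={
        "RetentionPeriodValue": 14,
        "RetentionPeriodUnit": "DAYS",
    },
    Description="Retain accidentally deregistered AMIs for 14 days",
)
```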
778 # A company has 150 TB of archived image data stored on-premises that needs to be moved to the AWS cloud within the next month. The company's current network connection allows uploads of up to 100 Mbps for this purpose only during the night. What is the MOST cost-effective mechanism to move this data and meet the migration deadline?
A. Use AWS Snowmobile to send data to AWS.
B. Order multiple AWS Snowball devices to send data to AWS.
C. Enable Amazon S3 transfer acceleration and upload data securely.
D. Create an Amazon S3 VPC endpoint and establish a VPN to upload the data.
B. Order multiple AWS Snowball devices to send data to AWS.
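A rough feasibility check of the network option, assuming an 8-hour upload window each night at the full 100 Mbps (the window length is an assumption):

```python
# Back-of-the-envelope: can 150 TB move over 100 Mbps nightly uploads in a month?
data_bits = 150e12 * 8              # 150 TB expressed in bits
bits_per_night = 100e6 * 8 * 3600   # 100 Mbps sustained for 8 hours
nights = data_bits / bits_per_night
print(f"{nights:.0f} nights")       # ~417 nights, far beyond the 1-month deadline
```

Since the network path cannot meet the deadline, shipping the data on multiple Snowball Edge devices is the practical and cost-effective choice; Snowmobile is sized for exabyte-scale migrations and would be overkill.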
779 # A company wants to migrate its three-tier application from on-premises to AWS. The web tier and application tier run on third-party virtual machines (VMs). The database tier is running on MySQL. The company needs to migrate the application by making as few architectural changes as possible. The company also needs a database solution that can restore data to a specific point in time. Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate the web tier and application tier to Amazon EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL on private subnets.
B. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon Aurora MySQL in private subnets.
C. Migrate the web tier to Amazon EC2 instances on public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL on private subnets.
D. Migrate the web tier and application tier to Amazon EC2 instances on public subnets. Migrate the database tier to Amazon Aurora MySQL on public subnets.
B. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon Aurora MySQL in private subnets.
This option introduces Amazon Aurora MySQL for the database tier, which is a fully managed relational database service compatible with MySQL. Aurora supports point-in-time recovery. While it is a managed service, it does involve a change of database engine, which can introduce some operational considerations.
Aurora provides automated backup and point-in-time recovery, simplifying backup management and data protection. Continuous incremental backups are taken automatically and stored in Amazon S3, and data retention periods can be specified to meet compliance requirements.
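Restoring an Aurora cluster to a point in time is a single API call. A minimal sketch; the cluster identifiers and timestamp are placeholders, and a DB instance must then be added to the restored cluster:

```python
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")

# Restore the Aurora MySQL cluster to a specific point in time.
# The restore always creates a new cluster alongside the source.
rds.restore_db_cluster_to_point_in_time(
    SourceDBClusterIdentifier="app-aurora-cluster",
    DBClusterIdentifier="app-aurora-cluster-restored",
    RestoreToTime=datetime(2024, 1, 15, 3, 30, tzinfo=timezone.utc),
)
```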
NOTE: **Option A: Migrate the web tier and application tier to Amazon EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL on private subnets. ** - Explanation: This option migrates the web and application tier to EC2 instances and the database tier to Amazon RDS for MySQL. RDS for MySQL provides point-in-time recovery capabilities, allowing you to restore the database to a specific point in time. This option minimizes architectural changes and operational overhead while using managed services for the database.
780 # A development team is collaborating with another company to create an integrated product. The other company needs to access an Amazon Simple Queue Service (Amazon SQS) queue that is contained in the development team’s account. The other company wants to poll the queue without giving up its own account permissions to do so. How should a solutions architect provide access to the SQS queue?
A. Create an instance profile that provides the other company with access to the SQS queue.
B. Create an IAM policy that gives the other company access to the SQS queue.
C. Create an SQS access policy that provides the other company with access to the SQS queue.
D. Create an Amazon Simple Notification Service (Amazon SNS) access policy that provides the other company with access to the SQS queue.
C. Create an SQS access policy that provides the other company with access to the SQS queue.
SQS access policies are specifically designed to control access to SQS resources. You can create an SQS access policy that allows the other company's AWS account, or specific identities in it, to access the SQS queue. This is a suitable option for sharing access to an SQS queue across accounts.
Summary: - Option B (Create an IAM policy) and Option C (Create an SQS access policy) are valid and common approaches to granting cross-account access to an SQS queue. Choosing between them may depend on factors such as whether you want to manage access through IAM or directly through SQS policies. Both options allow you to grant fine-grained permissions for the other company to poll the SQS queue without exposing broader permissions in your AWS account.
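A minimal sketch of the queue (resource) policy; the queue URL/ARN and the partner account ID (999988887777) are placeholders:

```python
import boto3
import json

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/integration-queue"

# Queue policy that lets the partner account poll the queue without
# changing anything in their own account's IAM setup.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
            "Action": [
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage",
                "sqs:GetQueueAttributes",
            ],
            "Resource": "arn:aws:sqs:us-east-1:111122223333:integration-queue",
        }
    ],
}

sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})
```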
781 # A company’s developers want a secure way to gain SSH access to the company’s Amazon EC2 instances running the latest version of Amazon Linux. Developers work remotely and in the corporate office. The company wants to use AWS services as part of the solution. EC2 instances are hosted in a private VPC subnet and access the Internet through a NAT gateway that is deployed on a public subnet. What should a solutions architect do to meet these requirements in the most cost-effective way?
A. Create a bastion host on the same subnet as the EC2 instances. Grant the ec2:CreateVpnConnection IAM permission to developers. Install EC2 Instance Connect so that developers can connect to EC2 instances.
B. Create an AWS Site-to-Site VPN connection between the corporate network and the VPC. Instruct developers to use the site-to-site VPN connection to access EC2 instances when the developers are on the corporate network. Instruct developers to set up another VPN connection to access when working remotely.
C. Create a bastion host on the VPC's public subnet. Configure the bastion host's security groups and SSH keys to only allow SSH connections and authentication from developers' remote and corporate networks. Instruct developers to connect through the bastion host using SSH to reach the EC2 instances.
D. Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances. Instruct developers to use AWS Systems Manager Session Manager to access EC2 instances.
D. Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances. Instruct developers to use AWS Systems Manager Session Manager to access EC2 instances.
This option involves using AWS Systems Manager Session Manager, which provides a secure and auditable way to access EC2 instances. Eliminates the need for a bastion host and allows access directly through the AWS Management Console. This can be a cost-effective and efficient solution.
Summary: - Option A (Create a bastion host with EC2 Instance Connect), Option C (Create a bastion host in the public subnet), and Option D (Use AWS Systems Manager Session Manager) are all viable options for secure SSH access. - Option D (AWS Systems Manager Session Manager) is often considered a cost-effective and secure solution without the need for a separate bastion host. It simplifies access and provides audit trails.
- Option A and Option C involve bastion hosts, but have different implementation details. Option A focuses on EC2 Instance Connect, while Option C uses a traditional bastion host with restricted access. Conclusion: - Option D (AWS Systems Manager Session Manager) is probably the most cost-effective and operationally efficient solution for secure SSH access to EC2 instances on a private subnet. It aligns with AWS best practices and simplifies management without the need for a separate bastion host.
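A minimal sketch of the permission setup; the role name is a placeholder already attached to the instances via an instance profile, and developers then connect with `aws ssm start-session --target <instance-id>`:

```python
import boto3

iam = boto3.client("iam")

# Grant the instance role the managed policy that Session Manager requires.
iam.attach_role_policy(
    RoleName="app-instance-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)
```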
782 # A pharmaceutical company is developing a new medicine. The volume of data that the company generates has grown exponentially in recent months. The company’s researchers regularly require that a subset of the entire data set be made available immediately with minimal delay. However, it is not necessary to access the entire data set daily. All data currently resides on local storage arrays, and the company wants to reduce ongoing capital expenditures. Which storage solution should a solutions architect recommend to meet these requirements?
A. Run AWS DataSync as a scheduled cron job to migrate data to an Amazon S3 bucket continuously.
B. Deploy an AWS Storage Gateway file gateway with an Amazon S3 bucket as target storage. Migrate data to the Storage Gateway appliance.
C. Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as target storage. Migrate data to the Storage Gateway appliance.
D. Configure an AWS site-to-site VPN connection from the on-premises environment to AWS. Migrate data to an Amazon Elastic File System (Amazon EFS) file system.
C. Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as target storage. Migrate data to the Storage Gateway appliance.
This option involves using Storage Gateway with cached volumes, storing frequently accessed data locally for low-latency access, and asynchronously backing up the entire data set to Amazon S3.
- For the specific requirement of having a subset of the data set immediately available with minimal delay, Option C (Storage Gateway volume gateway with cached volumes) is well aligned. It supports low-latency access to frequently accessed data stored on-premises, while ensuring durability of the overall data set in Amazon S3.
783 # A company has a business-critical application running on Amazon EC2 instances. The application stores data in an Amazon DynamoDB table. The company must be able to revert the table to any point within the last 24 hours. Which solution meets these requirements with the LEAST operational overhead?
A. Configure point-in-time recovery for the table.
B. Use AWS Backup for the table.
C. Use an AWS Lambda function to make an on-demand backup of the table every hour.
D. Turn on streams on the table to capture a log of all changes to the table in the last 24 hours. Store a copy of the stream in an Amazon S3 bucket.
A. Configure point-in-time recovery for the table.
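Point-in-time recovery is enabled with a single API call and then allows restores to any second within the retention window (up to 35 days), which covers the 24-hour requirement. A minimal sketch; the table name is a placeholder:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Turn on point-in-time recovery for the table.
dynamodb.update_continuous_backups(
    TableName="app-table",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
```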
784 # A company hosts an application that is used to upload files to an Amazon S3 bucket. Once uploaded, files are processed to extract metadata, which takes less than 5 seconds. The volume and frequency of uploads varies from a few files every hour to hundreds of simultaneous uploads. The company has asked a solutions architect to design a cost-effective architecture that meets these requirements. What should the solutions architect recommend?
A. Configure AWS CloudTrail trails to record S3 API calls. Use AWS AppSync to process the files.
B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.
C. Configure Amazon Kinesis Data Streams to process and send data to Amazon S3. Invoke an AWS Lambda function to process the files.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to process files uploaded to Amazon S3. Invoke an AWS Lambda function to process the files.
B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.
- This option leverages Amazon S3 event notifications to trigger an AWS Lambda function when an object (file) is created in the S3 bucket.
- AWS Lambda provides a serverless computing service, enabling code execution without the need to provision or manage servers.
- Lambda can be programmed to process the files, extract metadata and perform any other necessary tasks.
- Lambda can automatically scale based on the number of incoming events, making it suitable for variable uploads, from a few files per hour to hundreds of simultaneous uploads.
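A minimal sketch of the S3 event notification wiring described above; the bucket name and function ARN are placeholders, and the Lambda function's resource policy must also allow s3.amazonaws.com to invoke it:

```python
import boto3

s3 = boto3.client("s3")

# Invoke the metadata-extraction Lambda function on every object creation.
s3.put_bucket_notification_configuration(
    Bucket="upload-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:extract-metadata",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```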
785 # An enterprise application is deployed on Amazon EC2 instances and uses AWS Lambda functions for an event-driven architecture. The company uses non-production development environments in a different AWS account to test new features before the company deploys the features to production. Production instances show constant usage due to clients in different time zones. The company uses non-production instances only during business hours Monday through Friday. The company does not use non-production instances on weekends. The company wants to optimize costs for running its application on AWS. Which solution will meet these requirements in the MOST cost-effective way?
A. Use on-demand instances for production instances. Use dedicated hosts for non-production instances only on weekends.
B. Use reserved instances for production and non-production instances. Shut down non-production instances when they are not in use.
C. Use compute savings plans for production instances. Use on-demand instances for non-production instances. Shut down non-production instances when they are not in use.
D. Use dedicated hosts for production instances. Use EC2 instance savings plans for non-production instances.
C. Use compute savings plans for production instances. Use on-demand instances for non-production instances. Shut down non-production instances when they are not in use.
- Compute Savings Plans provide significant cost savings for a commitment to a constant amount of compute usage (measured in $/hr) over a 1 or 3 year term. This is suitable for production instances that show constant usage.
- Using on-demand instances for non-production instances allows for flexibility without a long-term commitment, and shutting down non-production instances when they are not in use helps minimize costs.
- This approach takes advantage of the cost-effectiveness of savings plans for predictable workloads and the flexibility of on-demand instances for sporadic use.
786 # A company stores data in an on-premises Oracle relational database. The company needs the data to be available in Amazon Aurora PostgreSQL for analysis. The company uses an AWS site-to-site VPN connection to connect its on-premises network to AWS. The company must capture changes that occur to the source database during migration to Aurora PostgreSQL. What solution will meet these requirements?
A. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to an Aurora PostgreSQL schema. Use the AWS Database Migration Service (AWS DMS) full load migration task to migrate the data.
B. Use AWS DataSync to migrate data to an Amazon S3 bucket. Import data from S3 to Aurora PostgreSQL using the Aurora PostgreSQL aws_s3 extension.
C. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to an Aurora PostgreSQL schema. Use AWS Database Migration Service (AWS DMS) to migrate existing data and replicate ongoing changes.
D. Use an AWS Snowball device to migrate data to an Amazon S3 bucket. Import data from S3 to Aurora PostgreSQL using the Aurora PostgreSQL aws_s3 extension.
C. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to an Aurora PostgreSQL schema. Use AWS Database Migration Service (AWS DMS) to migrate existing data and replicate ongoing changes.
- AWS Schema Conversion Tool (AWS SCT): This tool helps convert the source database schema to a format compatible with the target database. In this case, it will help to convert the Oracle schema to an Aurora PostgreSQL schema. - AWS Database Migration Service (AWS DMS):
- Full Load Migration: Can be used initially to migrate existing data from on-premises Oracle database to Aurora PostgreSQL.
- Ongoing Change Replication: AWS DMS can be configured for continuous replication, capturing changes to the source database and applying them to the target Aurora PostgreSQL database. This ensures that changes made to the Oracle database during the migration process are also reflected in Aurora PostgreSQL.
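A minimal sketch of the DMS task that combines the full load with change data capture; the endpoint and replication-instance ARNs are placeholders:

```python
import boto3
import json

dms = boto3.client("dms")

# Full load plus ongoing replication (CDC) so changes made to the Oracle
# source during the migration are carried over to Aurora PostgreSQL.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:oracle-src",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:aurora-tgt",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:dms-instance",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "all-tables",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```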
787 # A company built an application with Docker containers and needs to run the application in the AWS cloud. The company wants to use a managed service to host the application. The solution must scale appropriately according to the demand for individual container services. The solution should also not result in additional operational overhead or infrastructure to manage. What solutions will meet these requirements? (Choose two.)
A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes.
A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
- AWS Fargate is a serverless compute engine for containers, eliminating the need to manage the underlying EC2 instances.
- Automatically scales to meet application demand without manual intervention.
- Abstracts infrastructure management, providing a serverless experience for containerized applications.
788 # An e-commerce company is running a seasonal online sale. The company hosts its website on Amazon EC2 instances that span multiple availability zones. The company wants its website to handle traffic surges during the sale. Which solution will meet these requirements in the MOST cost-effective way?
A. Create an auto-scaling group that is large enough to handle the maximum traffic load. Stop half of your Amazon EC2 instances. Configure the auto-scaling group to use stopped instances to scale when traffic increases.
B. Create an Auto Scaling group for the website. Set the minimum auto-scaling group size so that it can handle large volumes of traffic without needing to scale.
C. Use Amazon CloudFront and Amazon ElastiCache to cache dynamic content with an auto-scaling group set as the origin. Configure the Auto Scaling group with the instances necessary to populate CloudFront and ElastiCache. Scales after the cache is completely full.
D. Configure an auto-scaling group to scale as traffic increases. Create a launch template to start new instances from a preconfigured Amazon Machine Image (AMI).
D. Configure an auto-scaling group to scale as traffic increases. Create a launch template to start new instances from a preconfigured Amazon Machine Image (AMI).
Provides elasticity, automatically scaling to handle increased traffic. The launch template allows for consistent instance configuration.
In summary, while each option has its merits, Option D, with its focus on dynamic scaling using auto-scaling and a launch template, is often preferred for its balance of cost-effectiveness and responsiveness to different traffic patterns. It aligns with best practices for scaling web applications on AWS.
789 # A solutions architect must provide an automated solution for an enterprise's compliance policy that states that security groups cannot include a rule that allows SSH from 0.0.0.0/0. It is necessary to notify the company if there is any violation of the policy.
A solution is needed as soon as possible. What should the solutions architect do to meet these requirements with the least operational overhead?
A. Write an AWS Lambda script that monitors security groups so that SSH is open to 0.0.0.0/0 addresses and creates a notification whenever it finds one.
B. Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service (Amazon SNS) notification when a non-compliant rule is created.
C. Create an IAM role with permissions to globally open security groups and network ACLs. Create an Amazon Simple Notification Service (Amazon SNS) topic to generate a notification each time a user assumes the role.
D. Configure a service control policy (SCP) that prevents non-administrative users from creating or editing security groups. Create a notification in the ticket system when a user requests a rule that requires administrator permissions.
B. Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service (Amazon SNS) notification when a non-compliant rule is created.
Takes advantage of the AWS Config managed rule, minimizing manual scripting. Config provides automated compliance checks.
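A minimal sketch of enabling the managed rule; the rule name is a placeholder, and an EventBridge rule on Config compliance-change events can then publish to an SNS topic:

```python
import boto3

config = boto3.client("config")

# The "restricted-ssh" managed rule flags security groups that allow
# inbound SSH from 0.0.0.0/0 (managed rule identifier INCOMING_SSH_DISABLED).
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "INCOMING_SSH_DISABLED",
        },
    }
)
```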
790 # A company has deployed an application to an AWS account. The application consists of microservices running on AWS Lambda and Amazon Elastic Kubernetes Service (Amazon EKS). A separate team supports each microservice. The company has multiple AWS accounts and wants to give each team their own account for their microservices. A solutions architect needs to design a solution that provides service-to-service communication over HTTPS (port 443). The solution must also provide a service registry for service discovery. Which solution will meet these requirements with the LEAST administrative overhead?
A. Create an inspection VPC. Deploy an AWS Network Firewall firewall in the inspection VPC. Attach the inspection VPC to a new transit gateway. Routes VPC-to-VPC traffic to the inspection VPC. Apply firewall rules to allow only HTTPS communication.
B. Create a VPC Lattice service network. Associate the microservices with the service network. Define HTTPS listeners for each service. Register microservices computing resources as targets. Identify VPCs that need to communicate with the services. Associate those VPCs with the service network.
C. Create a network load balancer (NLB) with an HTTPS listener and target groups for each microservice. Create an AWS PrivateLink endpoint service for each microservice. Create a VPC interface endpoint in each VPC that needs to consume that microservice.
D. Create peering connections between VPCs that contain microservices. Create a list of prefixes for each service that requires a connection to a client. Create route tables to route traffic to the appropriate VPC. Create security groups to allow only HTTPS communication.
B. Create a VPC Lattice service network. Associate the microservices with the service network. Define HTTPS listeners for each service. Register microservices computing resources as targets. Identify VPCs that need to communicate with the services. Associate those VPCs with the service network.
- Uses a VPC Lattice service network for association and communication.
- Defines HTTPS listeners and registers targets for each service.
Given the constraints and the need for the least administrative overhead, Option B provides a managed approach built around a service network. While it involves some initial configuration, it allows specific association and communication between microservices without managing peering, transit gateways, or per-service endpoints.
791 # A company has a mobile game that reads most of its metadata from an Amazon RDS DB instance. As the game increased in popularity, developers noticed slowdowns related to game metadata loading times. Performance metrics indicate that simply scaling the database will not help. A solutions architect should explore all options including capabilities for snapshots, replication, and sub-millisecond response times. What should the solutions architect recommend to solve these problems?
A. Migrate the database to Amazon Aurora with Aurora Replicas.
B. Migrate the database to Amazon DynamoDB with global tables.
C. Add an Amazon ElastiCache for Redis layer in front of the database.
D. Add an Amazon ElastiCache layer for Memcached in front of the database.
B. Migrate the database to Amazon DynamoDB with global tables.
- DynamoDB is designed for low latency access and can provide sub-millisecond response times.
- Global tables offer multi-region replication for high availability.
- DynamoDB’s architecture and features are well suited for scenarios with strict performance expectations.
Other Considerations:
A. Migrate the database to Amazon Aurora with Aurora Replicas:
- Aurora is known for its high performance, but sub-millisecond response times may not be guaranteed in all scenarios.
- Aurora replicas provide read scalability, but may not meet the submillisecond requirement.
C. Add an Amazon ElastiCache for Redis layer in front of the database:
- ElastiCache for Redis is an in-memory caching solution.
- While it may improve read performance, it may not guarantee sub-millisecond response times for all use cases.
D. Add an Amazon ElastiCache for Memcached layer in front of the database:
- Similar to Redis, ElastiCache for Memcached is a caching solution.
- Caching may improve read performance, but may not guarantee sub-millisecond response times.
792 # A company uses AWS Organizations for its multi-account AWS setup. The enterprise security organizational unit (OU) needs to share approved Amazon Machine Images (AMIs) with the development OU. AMIs are created by using encrypted AWS Key Management Service (AWS KMS) snapshots. Which solutions will meet these requirements? (Choose two.)
A. Add the development team's OU Amazon Resource Name (ARN) to the launch permissions list for the AMIs.
B. Add the organizations root Amazon Resource Name (ARN) to the launch permissions list for AMIs.
C. Update the key policy to allow the development team’s OU to use the AWS KMS keys that are used to decrypt snapshots.
D. Add the development team's account Amazon Resource Name (ARN) to the launch permissions list for the AMIs.
E. Recreate the AWS KMS key. Add a key policy to allow the root of Amazon Resource Name (ARN) organizations to use the AWS KMS key.
A. Add the development team's OU Amazon Resource Name (ARN) to the launch permissions list for the AMIs.
C. Update the key policy to allow the development team’s OU to use the AWS KMS keys that are used to decrypt snapshots.
- Option A: - Add the Amazon Resource Name (ARN) of the development team’s OU to the launch permissions list for AMIs:
- Explanation: This option is relevant to control who can launch the AMIs. By adding the development team's OU to the launch permissions, you give them the ability to use the AMIs.
- Fits the requirement: Share AMI.
Option C:
- Update key policy to allow the development team OU to use AWS KMS keys that are used to decrypt snapshots:
- Explanation: This option addresses decryption permissions. If you want your development team’s OU to use AWS KMS keys to decrypt snapshots (required to launch AMIs), adjusting the key policy is the right approach.
- Fits the requirement: Share encrypted snapshots.
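A minimal sketch of both pieces; the AMI ID and OU ARN are placeholders, and the corresponding KMS key policy must also grant the development OU kms:Decrypt, kms:DescribeKey, and kms:CreateGrant so the encrypted snapshots can be read at launch time:

```python
import boto3

ec2 = boto3.client("ec2")

# Grant launch permission on the AMI to the development OU.
ec2.modify_image_attribute(
    ImageId="ami-0123456789abcdef0",
    Attribute="launchPermission",
    LaunchPermission={
        "Add": [
            {"OrganizationalUnitArn": "arn:aws:organizations::111122223333:ou/o-example/ou-dev"}
        ]
    },
)
```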
793 # A data analysis company has 80 offices that are distributed worldwide. Each office hosts 1 PB of data and has between 1 and 2 Gbps of Internet bandwidth. The company needs to perform a one-time migration of a large amount of data from its offices to Amazon S3. The company must complete the migration within 4 weeks. Which solution will meet these requirements in the MOST cost-effective way?
A. Establish a new 10 Gbps AWS Direct Connect connection to each office. Transfer the data to Amazon S3.
B. Use multiple AWS Snowball Edge storage-optimized devices to store and transfer data to Amazon S3.
C. Use an AWS Snowmobile to store and transfer the data to Amazon S3.
D. Configure an AWS Storage Gateway Volume Gateway to transfer data to Amazon S3.
B. Use multiple AWS Snowball Edge storage-optimized devices to store and transfer data to Amazon S3.
- Considerations: This option can be cost-effective and efficient, especially when dealing with large data sets. Take advantage of physical transportation, reducing the impact on Internet bandwidth.
Other Options:
- Option C: Use an AWS Snowmobile:
- Explanation: AWS Snowmobile is a high-capacity data transfer service that involves a secure shipping container. It is designed for massive data migrations.
- Considerations: While Snowmobile is efficient for extremely large data volumes, it could be overkill for the described scenario of 80 offices with 1 PB of data each.
794 # A company has an Amazon Elastic File System (Amazon EFS) file system that contains a set of reference data. The company has applications on Amazon EC2 instances that need to read the data set. However, applications should not be able to change the data set. The company wants to use IAM access control to prevent applications from modifying or deleting the data set. What solution will meet these requirements?
A. Mount the EFS file system in read-only mode from within the EC2 instances.
B. Create a resource policy for the EFS file system that denies the elasticfilesystem:ClientWrite action to IAM roles that are attached to EC2 instances.
C. Create an identity policy for the EFS file system that denies the elasticfilesystem:ClientWrite action on the EFS file system.
D. Create an EFS access point for each application. Use Portable Operating System Interface (POSIX) file permissions to allow read-only access to files in the root directory.
C. Create an identity policy for the EFS file system that denies the elasticfilesystem:ClientWrite action on the EFS file system.
- Option C: Create an identity policy for the EFS file system that denies the elasticfilesystem:ClientWrite action on the EFS file system:
- This option is also aligned with IAM access control, denying actions using identity policies.
- Option C is a valid option to control modifications through IAM
Other Options:
- Option B: Create a resource policy for the EFS file system that denies the elasticfilesystem:ClientWrite action to IAM roles that are associated with EC2 instances:
- This option involves IAM roles and policies of resources, aligning with the IAM access control requirement.
- Option B is a valid option to use IAM access control to prevent modifications.
795 # A company has hired a third-party vendor to perform work on the company’s AWS account. The provider uses an automated tool that is hosted in an AWS account that the provider owns. The provider does not have IAM access to the company’s AWS account. The company must grant the provider access to the company’s AWS account. Which solution will MOST securely meet these requirements?
A. Create an IAM role in the company account to delegate access to the provider IAM role. Attach the appropriate IAM policies to the role for the permissions the provider requires.
B. Create an IAM user in the company account with a password that meets the password complexity requirements. Attach the appropriate IAM policies to the user for the permissions the provider requires.
C. Create an IAM group in the company account. Add the automated tool's IAM user from the provider account to the group. Attach the appropriate IAM policies to the group for the permissions that the provider requires.
D. Create an IAM user in the company account that has a permissions boundary that allows the provider account. Attach the appropriate IAM policies to the user for the permissions the provider requires.
A. Create an IAM role in the company account to delegate access to the provider IAM role. Attach the appropriate IAM policies to the role for the permissions the provider requires.
- Explanation: This option involves creating a cross-account IAM role to delegate access to the provider IAM role. The role will have policies attached for the required permissions.
- Security: This is a secure approach as it follows the principle of least privilege and uses cross-account roles for access.
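A minimal sketch of the cross-account role; the vendor account ID (999988887777), role name, and external ID are placeholders, and only the policies the tool actually needs should then be attached:

```python
import boto3
import json

iam = boto3.client("iam")

# Trust policy letting the vendor's account assume a role in the company
# account; the ExternalId condition guards against the confused-deputy problem.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": "vendor-tool-42"}},
        }
    ],
}

iam.create_role(
    RoleName="VendorAutomationAccess",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```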
Other Options:
- Option B: Create an IAM user in the company account with a password that meets the password complexity requirements:
- Explanation: This option involves creating a local IAM user in the company account with the required policies attached for the necessary permissions.
- Security: Using a local IAM user with a password could introduce security risks, and it is generally recommended to use temporary roles and credentials instead.
- Option C: Create an IAM group in the company account. Add the automated tool IAM user from the provider account to the group:
- Explanation: This option involves grouping the provider IAM user into the enterprise IAM group and attaching policies to the group for permissions.
- Security: While IAM groups are a good practice, directly adding external IAM users (from another account) to a group in the company account is less secure and may not be a best practice.
- Option D: Create an IAM user in the company account that has a permissions boundary that allows the provider account:
- Explanation: This option involves creating an IAM user with a permissions boundary that allows the provider account. Policies are attached to the user to grant the required permissions.
- Security: This approach uses permissions boundaries for control, but creating an IAM user with long-lived credentials is not as secure as using roles.
796 # A company wants to run its experimental workloads in the AWS cloud. The company has a budget for cloud spending. The company's CFO is concerned about accountability for each department's cloud spending. The CFO wants to be notified when the spending threshold reaches 60% of the budget. What solution will meet these requirements?
A. Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget.
B. Use AWS Cost Explorer forecasts to determine resource owners. Use AWS Cost Anomaly Detection to create alert threshold notifications when spending exceeds 60% of budget.
C. Use cost allocation tags on AWS resources to tag owners. Use the AWS Support API in AWS Trusted Advisor to create alert threshold notifications when spending exceeds 60% of budget.
D. Use AWS Cost Explorer forecasts to determine resource owners. Create usage budgets in AWS Budgets. Add an alert threshold to be notified when spending exceeds 60% of budget.
A. Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget.
- Explanation: This option suggests using cost allocation tags to tag owners, creating usage budgets in AWS Budgets, and setting an alert threshold for notification when spending exceeds 60% of the budget.
- Pros: Uses cost allocation tags to identify resource owners, and AWS Budgets is designed specifically for budgeting and cost tracking.
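A minimal sketch of the budget with a 60% alert; the account ID, budget amount, and e-mail address are placeholders, and cost allocation tags can be added as cost filters to split the budget by department:

```python
import boto3

budgets = boto3.client("budgets")

# Monthly cost budget that notifies the CFO when actual spend crosses 60%.
budgets.create_budget(
    AccountId="111122223333",
    Budget={
        "BudgetName": "experimental-workloads",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 60,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "cfo@example.com"}],
        }
    ],
)
```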
797 # A company wants to deploy an internal web application on AWS. The web application should only be accessible from the company office. The company needs to download security patches for the web application from the Internet. The company has created a VPC and configured an AWS site-to-site VPN connection to the company office. A solutions architect must design a secure architecture for the web application. What solution will meet these requirements?
A. Deploy the web application to Amazon EC2 instances on public subnets behind a public application load balancer (ALB). Connect an Internet gateway to the VPC. Set the ALB security group's inbound source to 0.0.0.0/0.
B. Deploy the web application on Amazon EC2 instances in private subnets behind an internal application load balancer (ALB). Deploy NAT gateways on public subnets. Attach an Internet gateway to the VPC. Set the inbound source of the ALB’s security group to the company’s office network CIDR block.
C. Deploy the web application to Amazon EC2 instances on public subnets behind an internal application load balancer (ALB). Deploy NAT gateways on private subnets. Attach an Internet gateway to the VPC. Set the outbound destination of the ALB's security group to the CIDR block of the company's office network.
D. Deploy the web application to Amazon EC2 instances in private subnets behind a public application load balancer (ALB). Connect an Internet gateway to the VPC. Set the ALB security group's outbound destination to 0.0.0.0/0.
B. Deploy the web application on Amazon EC2 instances in private subnets behind an internal application load balancer (ALB). Deploy NAT gateways on public subnets. Attach an Internet gateway to the VPC. Set the inbound source of the ALB’s security group to the company’s office network CIDR block.
- Explanation: This option deploys the web application on private subnets behind an internal ALB, with NAT gateways on public subnets. Allows incoming traffic from the CIDR block of the company’s office network.
- Pros: Restricts incoming traffic to the company’s office network.
798 # A company maintains its accounting records in a custom application that runs on Amazon EC2 instances. The company needs to migrate data to an AWS managed service for development and maintenance of application data. The solution should require minimal operational support and provide immutable, cryptographically verifiable records of data changes. Which solution will meet these requirements in the MOST cost-effective way?
A. Copy the application logs to an Amazon Redshift cluster.
B. Copy the application logs to an Amazon Neptune cluster.
C. Copy the application logs to an Amazon Timestream database.
D. Copy the records from the application into an Amazon Quantum Ledger Database (Amazon QLDB) ledger.
D. Copy the records from the application into an Amazon Quantum Ledger Database (Amazon QLDB) ledger.
- Explanation: Amazon QLDB is designed for ledger-style applications, providing a transparent, immutable, and cryptographically verifiable record of transactions. It is suitable for use cases where an immutable and transparent record of all changes is needed.
- Pros: Designed specifically for immutable records and cryptographic verification.
799 # A company’s marketing data is loaded from multiple sources into an Amazon S3 bucket. A series of data preparation jobs aggregate the data for reporting. Data preparation jobs must be run at regular intervals in parallel. Some jobs must be run in a specific order later. The company wants to eliminate the operational overhead of job error handling, retry logic, and state management. What solution will meet these requirements?
A. Use an AWS Lambda function to process the data as soon as the data is uploaded to the S3 bucket. Invoke other Lambda functions at regularly scheduled intervals.
B. Use Amazon Athena to process the data. Use Amazon EventBridge Scheduler to invoke Athena at a regular interval.
C. Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run DataBrew data preparation jobs.
D. Use AWS Data Pipeline to process the data. Schedule the data pipeline to process the data once at midnight.
C. Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run DataBrew data preparation jobs.
It provides detailed control over the order in which jobs run and integrates with Step Functions for workflow orchestration and management.
- Explanation: AWS Glue DataBrew can be used for data preparation, and AWS Step Functions can provide orchestration for jobs that must be executed in a specific order. Step Functions also handles error handling, retry logic, and state management.
- Pros: Detailed control over job ordering, built-in orchestration capabilities.
800 # A solutions architect is designing a payment processing application that runs on AWS Lambda in private subnets across multiple availability zones. The app uses multiple Lambda functions and processes millions of transactions every day. The architecture should ensure that the application does not process duplicate payments. What solution will meet these requirements?
A. Use Lambda to retrieve all payments due. Post payments due to an Amazon S3 bucket. Configure the S3 bucket with an event notification to invoke another Lambda function to process payments due.
B. Use Lambda to retrieve all payments due. Post the payments due to an Amazon Simple Queue Service (Amazon SQS) queue. Set up another Lambda function to poll the SQS queue and process payments due.
C. Use Lambda to retrieve all due payments. Publish the payments to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure another Lambda function to poll the FIFO queue and process payments due.
D. Use Lambda to retrieve all payments due. Store payments due in an Amazon DynamoDB table. Configure flows in the DynamoDB table to invoke another Lambda function to process payments due.
C. Use Lambda to retrieve all due payments. Publish the payments to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure another Lambda function to poll the FIFO queue and process payments due.
- Explanation: Similar to Option B, but uses an SQS FIFO queue, which provides ordering and exactly-once processing.
- Pros: Ensures message ordering and exactly-once processing.
Considering the requirement to ensure that the application does not process duplicate payments, Option C (Amazon SQS FIFO queue) is the most appropriate option. It takes advantage of the reliability, ordering, and exactly-once processing features of an SQS FIFO queue, which align with the need to process payments without duplicates.
NOTE: Option B, with a standard SQS queue, has the potential for message duplication if deduplication is not handled explicitly.
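A minimal sketch of the FIFO queue and a deduplicated send; the queue name, payload, and IDs are placeholders:

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue: ordering within a message group plus deduplication within a
# 5-minute window prevents the same payment from being processed twice.
queue = sqs.create_queue(
    QueueName="payments.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "false"},
)

sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"payment_id": "pmt-42", "amount": "19.99"}',
    MessageGroupId="account-123",      # preserves ordering per account
    MessageDeduplicationId="pmt-42",   # resends with the same ID are dropped
)
```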
801 # A company runs multiple workloads in its on-premises data center. The company’s data center cannot scale fast enough to meet the company’s growing business needs. The company wants to collect usage and configuration data about on-premises servers and workloads to plan a migration to AWS. What solution will meet these requirements?
A. Set the home AWS Region in AWS Migration Hub. Use AWS Systems Manager to collect data about on-premises servers.
B. Set the home AWS Region in AWS Migration Hub. Use AWS Application Discovery Service to collect data about on-premises servers.
C. Use the AWS Schema Conversion Tool (AWS SCT) to create the relevant templates. Use AWS Trusted Advisor to collect data about on-premises servers.
D. Use the AWS Schema Conversion Tool (AWS SCT) to create the relevant templates. Use the AWS Database Migration Service (AWS DMS) to collect data about on-premises servers.
B. Set the home AWS Region in AWS Migration Hub. Use AWS Application Discovery Service to collect data about on-premises servers.
AWS ADS is specifically designed to discover detailed information about servers, applications, and dependencies, providing a complete view of the on-premises environment.
802 # A company has an organization in AWS Organizations that has all features enabled. The company requires that all API calls and logins to any existing or new AWS account be audited. The company needs a managed solution to avoid additional work and minimize costs. The business also needs to know when any AWS account does not meet the AWS Foundational Security Best Practices (FSBP) standard. Which solution will meet these requirements with the LEAST operational overhead?
A. Deploy an AWS control tower environment in the Organization Management account. Enable AWS Security Hub and AWS Control Tower Account Factory in your environment.
B. Deploy an AWS Control Tower environment in a dedicated Organization Member account. Enable AWS Security Hub and AWS Control Tower Account Factory in your environment.
C. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC to the Amazon GuardDuty self-service provisioning in MALZ.
D. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC to the AWS Security Hub self-service provision on the MALZ.
A. Deploy an AWS control tower environment in the Organization Management account. Enable AWS Security Hub and AWS Control Tower Account Factory in your environment.
- Explanation: Deploy AWS Control Tower in the Organization Management account and enable AWS Security Hub and AWS Control Tower Account Factory.
- Pros: Centralized deployment in the Organization Management account provides a more efficient way to manage and govern multiple accounts. It simplifies operations and reduces the overhead of implementing and managing Control Tower in each separate account.
803 # A company has stored 10 TB of log files in Apache Parquet format in an Amazon S3 bucket. From time to time, the company needs to use SQL to analyze log files. Which solution will meet these requirements in the MOST cost-effective way?
A. Create an Amazon Aurora MySQL database. Migrate S3 bucket data to Aurora using AWS Database Migration Service (AWS DMS). Issue SQL statements to the Aurora database.
B. Create an Amazon Redshift cluster. Use Redshift Spectrum to execute SQL statements directly on data in your S3 bucket.
C. Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to run SQL statements directly on data in your S3 bucket.
D. Create an Amazon EMR cluster. Use Apache Spark SQL to execute SQL statements directly on data in the S3 bucket.
C. Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to run SQL statements directly on data in your S3 bucket.
- AWS Glue Crawler: AWS Glue can discover and store metadata about log files using a crawler. The crawler automatically identifies the schema and structure of the data in the S3 bucket, making it easy to query.
- Amazon Athena: Athena is a serverless query service that allows you to run SQL queries directly on data in Amazon S3. It supports querying data in various formats, including Apache Parquet. Since Athena is serverless, you only pay for the queries you run, making it a cost-effective solution.
Other Options: Option A (using Amazon Aurora MySQL with AWS DMS) involves unnecessary data migration and may result in increased costs and complexity. Option B (using Amazon Redshift Spectrum) introduces the overhead of managing a Redshift cluster, which might be overkill for occasional SQL analysis. Option D (Using Amazon EMR with Apache Spark SQL) involves setting up and managing an EMR cluster, which may be more complex and expensive than necessary for occasional log file queries.
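A minimal sketch of running a query with Athena once the Glue crawler has cataloged the Parquet files; the database, table, and result bucket names are placeholders:

```python
import boto3

athena = boto3.client("athena")

# Query the Parquet log files in place; Athena charges per query scanned,
# which suits the occasional-analysis pattern.
athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM logs GROUP BY status",
    QueryExecutionContext={"Database": "log_analytics"},
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},
)
```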
804 # An enterprise needs a solution to prevent AWS CloudFormation stacks from deploying AWS Identity and Access Management (IAM) resources that include an inline policy or “*” in the declaration. The solution should also prohibit the deployment of Amazon EC2 instances with public IP addresses. The company has AWS Control Tower enabled in its organization in AWS organizations. What solution will meet these requirements?
A. Use proactive controls in AWS Control Tower to block the deployment of EC2 instances with public IP addresses and inline policies with elevated or “*” access.
B. Use AWS Control Tower detective controls to block the deployment of EC2 instances with public IP addresses and inline policies with elevated or “*” access.
C. Use AWS Config to create rules for EC2 and IAM compliance. Configure the rules to run an AWS Systems Manager Session Manager automation to delete a resource when it is not compliant.
D. Use a service control policy (SCP) to block actions for the EC2 instances and IAM resources if the actions lead to noncompliance.
D. Use a service control policy (SCP) to block actions for the EC2 instances and IAM resources if the actions lead to noncompliance.
- Service Control Policies (SCP): SCPs are used to set fine-grained permissions for entities in an AWS organization. They allow you to set controls over what actions are allowed or denied on your accounts. In this scenario, an SCP can be created to deny specific actions related to EC2 instances and IAM resources that have inline policies with elevated or “*” access.
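A rough sketch of such an SCP attached through Organizations; the policy name and statement details are illustrative assumptions and would need review against the company's exact compliance rules:

```python
import boto3
import json

orgs = boto3.client("organizations")

# Deny inline IAM policy writes and deny launching instances that request
# a public IP on their network interface.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInlineIamPolicies",
            "Effect": "Deny",
            "Action": ["iam:PutRolePolicy", "iam:PutUserPolicy", "iam:PutGroupPolicy"],
            "Resource": "*",
        },
        {
            "Sid": "DenyPublicIpOnLaunch",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:network-interface/*",
            "Condition": {"Bool": {"ec2:AssociatePublicIpAddress": "true"}},
        },
    ],
}

orgs.create_policy(
    Name="deny-inline-iam-and-public-ips",
    Description="Block inline IAM policies and public-IP EC2 launches",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
```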
805 # A company’s web application that is hosted on the AWS cloud has recently increased in popularity. The web application currently exists on a single Amazon EC2 instance on a single public subnet. The web application has not been able to meet the demand of increased web traffic. The business needs a solution that provides high availability and scalability to meet growing user demand without rewriting the web application. What combination of steps will meet these requirements? (Choose two.)
A. Replace the EC2 instance with a larger compute-optimized instance.
B. Configure Amazon EC2 auto-scaling with multiple availability zones on private subnets.
C. Configure a NAT gateway on a public subnet to handle web requests.
D. Replace the EC2 instance with a larger memory-optimized instance.
E. Configure an application load balancer in a public subnet to distribute web traffic.
B. Configure Amazon EC2 auto-scaling with multiple availability zones on private subnets.
E. Configure an application load balancer in a public subnet to distribute web traffic.
- Amazon EC2 Auto Scaling (Option B): By configuring Auto Scaling with multiple availability zones, you ensure that your web application can automatically adjust the number of instances to handle different levels of demand. This improves availability and scalability.
- Application Load Balancer (Option E): An application load balancer (ALB) on a public subnet can distribute incoming web traffic across multiple EC2 instances. ALB is designed for high availability and can efficiently handle traffic distribution, improving the overall performance of the web application.