sthithapragnakk -- SAA Exam Dumps Jan 24 COPY Flashcards
773 # A company wants to deploy its containerized application workloads in a VPC across three availability zones. The business needs a solution that is highly available across all availability zones. The solution should require minimal changes to the application. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS service auto-scaling to use target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with an availability zone attribute.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) self-managed nodes. Configure application auto-scaling to use target tracking scaling. Set the minimum capacity to 3.
C. Use Amazon EC2 Reserved Instances. Start three EC2 instances in a spread placement group. Configure an auto-scaling group to use target tracking scaling. Set the minimum capacity to 3.
D. Use an AWS Lambda function. Configure the Lambda function to connect to a VPC. Configure application auto-scaling to use Lambda as a scalable target. Set the minimum capacity to 3.
A. Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS service auto-scaling to use target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with an availability zone attribute.
This option involves using ECS for container orchestration. Amazon ECS Service Auto Scaling allows you to automatically adjust the number of tasks running on a service. Setting the task placement strategy to be “spread” with an Availability Zone attribute ensures that tasks are distributed equally across Availability Zones. This solution is designed for high availability with minimal application changes.
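As a rough illustration of how option A could be wired up with boto3 (the cluster, service, and task definition names are hypothetical placeholders, not values from the question), a minimal sketch:

```python
import boto3

ecs = boto3.client("ecs")

# Create an ECS service with 3 tasks spread evenly across Availability Zones.
ecs.create_service(
    cluster="my-cluster",
    serviceName="web-app",
    taskDefinition="web-app-task:1",
    desiredCount=3,
    placementStrategy=[
        # Spread tasks across AZs using the availability-zone attribute.
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
    ],
)

# Register the service as a scalable target with a minimum capacity of 3;
# a target tracking policy would then be added with put_scaling_policy.
autoscaling = boto3.client("application-autoscaling")
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/web-app",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=3,
    MaxCapacity=12,
)
```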
774 # A media company stores movies on Amazon S3. Each movie is stored in a single video file ranging from 1 GB to 10 GB in size. The company must be able to provide streaming content for a movie within 5 minutes of a user purchasing it. There is a greater demand for films less than 20 years old than for films more than 20 years old. The company wants to minimize the costs of the hosting service based on demand. What solution will meet these requirements?
A. Store all media in Amazon S3. Use S3 lifecycle policies to move media data to the infrequent access tier when demand for a movie decreases.
B. Store newer movie video files in S3 Standard. Store older movie video files in S3 Standard-Infrequent Access (S3 Standard-IA). When a user requests an older movie, retrieve the video file using standard retrieval.
C. Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier Flexible Retrieval. When a user requests an older movie, retrieve the video file using expedited retrieval.
D. Store newer movie video files in S3 Standard. Store older movie video files in S3 Glacier Flexible Retrieval. When a user requests an older movie, retrieve the video file using bulk retrieval.
C. Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier Flexible Retrieval. When a user requests an older movie, retrieve the video file using expedited retrieval.
This option uses S3 Intelligent-Tiering for newer movies, automatically optimizing costs based on access patterns. Older movies are stored in S3 Glacier Flexible Retrieval, and expedited retrieval is used when a user requests an older movie. Expedited retrieval from S3 Glacier Flexible Retrieval typically makes data available within 1-5 minutes, which satisfies the 5-minute requirement.
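A minimal boto3 sketch of initiating an expedited restore for an archived movie file (bucket name and object key are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Restore an archived object from S3 Glacier Flexible Retrieval using the
# Expedited tier (typically 1-5 minutes to availability).
s3.restore_object(
    Bucket="movie-archive-bucket",
    Key="older-movies/classic-film.mp4",
    RestoreRequest={
        "Days": 1,  # keep the temporary restored copy for 1 day
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```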
775 # A solutions architect needs to design the architecture of an application that a vendor provides as a Docker container image. The container needs 50 GB of available storage for temporary files. The infrastructure must be serverless. Which solution meets these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function that uses the Docker container image with a volume mounted on Amazon S3 that has more than 50 GB of space.
B. Create an AWS Lambda function that uses the Docker container image with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the Amazon EC2 launch type with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space. Create a task definition for the container image. Create a service with that task definition.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.
The key here is that the container needs 50 GB of temporary storage. Remember that Lambda supports container images only up to 10 GB in size, and Lambda's ephemeral storage (the /tmp directory) defaults to 512 MB and can be increased to at most 10 GB (https://blog.awsfundamentals.com/lambda-limitations), so Lambda cannot meet the 50 GB requirement.
This option involves using Amazon ECS with Fargate, a serverless computing engine for containers. Using Amazon EFS enables persistent storage across multiple containers and instances. This approach meets the requirement of providing 50GB of storage and is serverless as it uses Fargate.
AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). Each Amazon ECS task hosted on AWS Fargate receives a default amount of ephemeral storage (temporary file storage) for bind mounts, and additional storage such as an Amazon EFS file system can be mounted and shared among containers using the volumes, mountPoints, and volumesFrom parameters in the task definition. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-task-storage.html
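A minimal sketch of registering a Fargate task definition that mounts an EFS file system for the temporary files (the file system ID, image URI, and names are placeholders, not values from the question):

```python
import boto3

ecs = boto3.client("ecs")

# Fargate task definition with an EFS volume mounted at /scratch.
ecs.register_task_definition(
    family="vendor-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    volumes=[
        {
            "name": "scratch",
            "efsVolumeConfiguration": {"fileSystemId": "fs-0123456789abcdef0"},
        }
    ],
    containerDefinitions=[
        {
            "name": "vendor-container",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/vendor-app:latest",
            "essential": True,
            "mountPoints": [{"sourceVolume": "scratch", "containerPath": "/scratch"}],
        }
    ],
)
```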
776 # A company needs to use its on-premises LDAP directory service to authenticate its users to the AWS Management Console. The directory service does not support Security Assertion Markup Language (SAML). Which solution meets these requirements?
A. Enable AWS IAM Identity Center (AWS Single Sign-On) between AWS and the on-premises LDAP directory.
B. Create an IAM policy that uses AWS credentials and integrate the policy into LDAP.
C. Configure a process that rotates IAM credentials each time LDAP credentials are updated.
D. Develop an on-premises custom identity broker application or process that uses AWS Security Token Service (AWS STS) to obtain short-lived credentials.
D. Develop an on-premises custom identity broker application or process that uses AWS Security Token Service (AWS STS) to obtain short-lived credentials.
This option involves creating a custom on-premises identity broker application or process that communicates with AWS Security Token Service (AWS STS) to obtain short-lived credentials. The custom broker acts as an intermediary between the on-premises LDAP directory and AWS, providing temporary security credentials without requiring SAML support in the directory. This is a common approach for scenarios where SAML is not an option.
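A minimal sketch of the broker pattern, assuming the broker has already validated the user against LDAP (the LDAP validation, the session policy, and the user name are hypothetical):

```python
import json
import boto3

sts = boto3.client("sts")

def issue_temporary_credentials(ldap_user: str) -> dict:
    """Called by the on-premises broker after it has validated the user's
    LDAP credentials (validation itself is outside this sketch)."""
    response = sts.get_federation_token(
        Name=ldap_user,
        DurationSeconds=3600,
        # Scope the federated session to the permissions this user should have.
        Policy=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{"Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "*"}],
        }),
    )
    # The returned AccessKeyId / SecretAccessKey / SessionToken can be exchanged
    # at the https://signin.aws.amazon.com/federation endpoint for a console
    # sign-in URL, giving the user AWS Management Console access.
    return response["Credentials"]
```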
- Option A: Enable AWS IAM Identity Center (AWS Single Sign-On) between AWS and the on-premises LDAP directory.
- Explanation: AWS IAM Identity Center (successor to AWS Single Sign-On) simplifies AWS access management for enterprise users and administrators. It supports integration with on-premises directories, but it relies primarily on SAML for federation. Since the on-premises LDAP directory does not support SAML, option A is not suitable for this scenario.
- Option B: Create an IAM policy that uses AWS credentials and integrate the policy into LDAP.
- Explanation: AWS IAM policies are associated with AWS identities; they cannot be integrated directly into LDAP. This option does not align with common practices for federated authentication.
- Option C: Configure a process that rotates IAM credentials each time LDAP credentials are updated.
- Explanation: Rotating IAM credentials every time LDAP credentials are updated introduces complexity and operational overhead. IAM credentials are typically long-lived, and this approach does not provide the single sign-on (SSO) experience that federated authentication solutions offer.
777 # A company stores multiple Amazon Machine Images (AMIs) in an AWS account to launch its Amazon EC2 instances. AMIs contain critical data and configurations that are necessary for business operations. The company wants to implement a solution that recovers accidentally deleted AMIs quickly and efficiently. Which solution will meet these requirements with the LEAST operational overhead?
A. Create Amazon Elastic Block Store (Amazon EBS) snapshots of the AMIs. Store snapshots in a separate AWS account.
B. Copy all AMIs to another AWS account periodically.
C. Create a Recycle Bin retention rule.
D. Upload the AMIs to an Amazon S3 bucket that has cross-region replication.
C. Create a Recycle Bin retention rule.
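A Recycle Bin retention rule keeps accidentally deregistered AMIs recoverable for a defined period without copying them to other accounts or regions, so it has the least operational overhead. A minimal boto3 sketch (the retention period and description are illustrative):

```python
import boto3

rbin = boto3.client("rbin")

# Retention rule that keeps accidentally deregistered AMIs recoverable for 14 days.
rbin.create_rule(
    ResourceType="EC2_IMAGE",
    RetentionPeriod={"RetentionPeriodValue": 14, "RetentionPeriodUnit": "DAYS"},
    Description="Recover accidentally deleted AMIs",
)
```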
778 # A company has 150 TB of archived image data stored on-premises that needs to be moved to the AWS cloud within the next month. The company's current network connection allows uploads of up to 100 Mbps, and that capacity is available for this purpose only at night. What is the MOST cost-effective mechanism to move this data and meet the migration deadline?
A. Use AWS Snowmobile to send data to AWS.
B. Order multiple AWS Snowball devices to send data to AWS.
C. Enable Amazon S3 transfer acceleration and upload data securely.
D. Create an Amazon S3 VPC endpoint and establish a VPN to upload the data.
B. Order multiple AWS Snowball devices to send data to AWS.
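A quick back-of-envelope check (assuming roughly 10 usable upload hours per night, an assumption not stated in the question) shows why uploading over the existing link cannot meet the one-month deadline, which makes Snowball devices the cost-effective choice; Snowmobile is sized for far larger (multi-petabyte to exabyte) migrations.

```python
# 150 TB to migrate over a 100 Mbps link that is available only at night.
data_tb = 150
link_mbps = 100
hours_per_night = 10  # assumed usable window

data_bits = data_tb * 1e12 * 8                      # total bits to transfer
bits_per_night = link_mbps * 1e6 * 3600 * hours_per_night
nights_needed = data_bits / bits_per_night
print(f"~{nights_needed:.0f} nights needed")        # roughly 330+ nights, far beyond 4 weeks
```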
779 # A company wants to migrate its three-tier application from on-premises to AWS. The web tier and application tier run on third-party virtual machines (VMs). The database tier is running on MySQL. The company needs to migrate the application by making as few architectural changes as possible. The company also needs a database solution that can restore data to a specific point in time. Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate the web tier and application tier to Amazon EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL on private subnets.
B. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon Aurora MySQL in private subnets.
C. Migrate the web tier to Amazon EC2 instances on public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL on private subnets.
D. Migrate the web tier and application tier to Amazon EC2 instances on public subnets. Migrate the database tier to Amazon Aurora MySQL on public subnets.
B. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon Aurora MySQL in private subnets.
This option introduces Amazon Aurora MySQL for the database tier, a fully managed relational database service that is compatible with MySQL. Aurora supports point-in-time recovery. While it is a managed service, it also changes the database engine, which can introduce some operational considerations.
Aurora provides automated backup and point-in-time recovery, simplifying backup management and data protection. Continuous incremental backups are taken automatically and stored in Amazon S3, and data retention periods can be specified to meet compliance requirements.
NOTE: Option A (Migrate the web tier and application tier to Amazon EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL in private subnets.) - Explanation: This option migrates the web and application tiers to EC2 instances and the database tier to Amazon RDS for MySQL. RDS for MySQL also provides point-in-time recovery, allowing you to restore the database to a specific point in time. This option minimizes architectural changes and operational overhead while using a managed service for the database.
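A minimal sketch of performing a point-in-time restore on the Aurora MySQL cluster (cluster identifiers and the timestamp are placeholders; the restore creates a new cluster):

```python
from datetime import datetime, timezone
import boto3

rds = boto3.client("rds")

# Restore the Aurora MySQL cluster to a specific point in time.
rds.restore_db_cluster_to_point_in_time(
    SourceDBClusterIdentifier="app-aurora-cluster",
    DBClusterIdentifier="app-aurora-cluster-restored",
    RestoreToTime=datetime(2024, 1, 15, 3, 30, tzinfo=timezone.utc),
)
```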
780 # A development team is collaborating with another company to create an integrated product. The other company needs to access an Amazon Simple Queue Service (Amazon SQS) queue that is contained in the development team’s account. The other company wants to poll the queue without giving up its own account permissions to do so. How should a solutions architect provide access to the SQS queue?
A. Create an instance profile that provides the other company with access to the SQS queue.
B. Create an IAM policy that gives the other company access to the SQS queue.
C. Create an SQS access policy that provides the other company with access to the SQS queue.
D. Create an Amazon Simple Notification Service (Amazon SNS) access policy that provides the other company with access to the SQS queue.
C. Create an SQS access policy that provides the other company with access to the SQS queue.
SQS access policies are specifically designed to control access to SQS resources. You can create an SQS access policy that allows the other company's AWS account, or specific identities in that account, to access the SQS queue. This is a suitable way to share access to an SQS queue across AWS accounts.
Summary: Option B (Create an IAM policy) and Option C (Create an SQS access policy) are both common mechanisms for controlling access to an SQS queue. However, an IAM policy in the development team's account cannot grant permissions to principals in the other company's account, whereas the queue (resource) policy in Option C lets the other company poll the queue with its own credentials while granting only the fine-grained permissions required.
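A minimal sketch of such a cross-account queue policy (the queue URL/ARN, account IDs, and region are placeholders):

```python
import json
import boto3

sqs = boto3.client("sqs")

# Queue (resource) policy that lets the partner account poll the queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::222233334444:root"},
        "Action": ["sqs:ReceiveMessage", "sqs:GetQueueAttributes", "sqs:DeleteMessage"],
        "Resource": "arn:aws:sqs:us-east-1:111122223333:integration-queue",
    }],
}

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/integration-queue",
    Attributes={"Policy": json.dumps(policy)},
)
```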
781 # A company’s developers want a secure way to gain SSH access to the company’s Amazon EC2 instances running the latest version of Amazon Linux. Developers work remotely and in the corporate office. The company wants to use AWS services as part of the solution. EC2 instances are hosted in a private VPC subnet and access the Internet through a NAT gateway that is deployed on a public subnet. What should a solutions architect do to meet these requirements in the most cost-effective way?
A. Create a bastion host on the same subnet as the EC2 instances. Grant the ec2:CreateVpnConnection IAM permission to developers. Install EC2 Instance Connect so that developers can connect to EC2 instances.
B. Create an AWS Site-to-Site VPN connection between the corporate network and the VPC. Instruct developers to use the site-to-site VPN connection to access EC2 instances when the developers are on the corporate network. Instruct developers to set up another VPN connection to access when working remotely.
C. Create a bastion host in the VPC public subnet. Configure the bastion host's security groups and SSH keys to only allow SSH connections and authentication from the developers' remote and corporate networks. Instruct developers to connect through the bastion host using SSH to reach the EC2 instances.
D. Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances. Instruct developers to use AWS Systems Manager Session Manager to access the EC2 instances.
D. Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances. Instruct developers to use AWS Systems Manager Session Manager to access the EC2 instances.
This option involves using AWS Systems Manager Session Manager, which provides a secure and auditable way to access EC2 instances. Eliminates the need for a bastion host and allows access directly through the AWS Management Console. This can be a cost-effective and efficient solution.
Summary: - Option A (Create a bastion host with EC2 Instance Connect), Option C (Create a bastion host in the public subnet), and Option D (Use AWS Systems Manager Session Manager) are all viable approaches for secure SSH access. - Option D (AWS Systems Manager Session Manager) is often considered the most cost-effective and secure solution because it does not require a separate bastion host; it simplifies access and provides audit trails.
- Option A and Option C involve bastion hosts, but have different implementation details. Option A focuses on EC2 Instance Connect, while Option C uses a traditional bastion host with restricted access. Conclusion: - Option D (AWS Systems Manager Session Manager) is probably the most cost-effective and operationally efficient solution for secure SSH access to EC2 instances on a private subnet. It aligns with AWS best practices and simplifies management without the need for a separate bastion host.
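A minimal sketch of the Session Manager setup from option D (the role name and instance ID are placeholders):

```python
import boto3

iam = boto3.client("iam")

# Attach the AWS managed policy that Session Manager requires to the instance role.
iam.attach_role_policy(
    RoleName="ec2-app-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# Developers then connect without SSH keys, a bastion, or open inbound ports,
# for example from the CLI:
#   aws ssm start-session --target i-0123456789abcdef0
```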
782 # A pharmaceutical company is developing a new medicine. The volume of data that the company generates has grown exponentially in recent months. The company’s researchers regularly require that a subset of the entire data set be made available immediately with minimal delay. However, it is not necessary to access the entire data set daily. All data currently resides on local storage arrays, and the company wants to reduce ongoing capital expenditures. Which storage solution should a solutions architect recommend to meet these requirements?
A. Run AWS DataSync as a scheduled cron job to migrate data to an Amazon S3 bucket continuously.
B. Deploy an AWS Storage Gateway file gateway with an Amazon S3 bucket as target storage. Migrate data to the Storage Gateway appliance.
C. Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as target storage. Migrate data to the Storage Gateway appliance.
D. Configure an AWS site-to-site VPN connection from the on-premises environment to AWS. Migrate data to an Amazon Elastic File System (Amazon EFS) file system.
C. Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as target storage. Migrate data to the Storage Gateway appliance.
This option involves using Storage Gateway with cached volumes, storing frequently accessed data locally for low-latency access, and asynchronously backing up the entire data set to Amazon S3.
- For the specific requirement of having a subset of the data set immediately available with minimal delay, Option C (Storage Gateway volume gateway with cached volumes) is well aligned. It supports low-latency access to frequently accessed data stored on-premises, while ensuring durability of the overall data set in Amazon S3.
783 # A company has a business-critical application running on Amazon EC2 instances. The application stores data in an Amazon DynamoDB table. The company must be able to revert the table to any point within the last 24 hours. Which solution meets these requirements with the LEAST operational overhead?
A. Configure point-in-time recovery for the table.
B. Use AWS Backup for the table.
C. Use an AWS Lambda function to make an on-demand backup of the table every hour.
D. Turn on streams on the table to capture a log of all changes to the table in the last 24 hours. Store a copy of the stream in an Amazon S3 bucket.
A. Configure point-in-time recovery for the table.
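Point-in-time recovery (PITR) continuously backs up the table and lets you restore it to any second within the retention window (up to 35 days), which covers the 24-hour requirement with no custom backup jobs. Enabling it is a single call; a minimal sketch with a placeholder table name:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Enable point-in-time recovery on the table.
dynamodb.update_continuous_backups(
    TableName="app-data",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
```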
784 # A company hosts an application that is used to upload files to an Amazon S3 bucket. Once uploaded, files are processed to extract metadata, which takes less than 5 seconds. The volume and frequency of uploads varies from a few files every hour to hundreds of simultaneous uploads. The company has asked a solutions architect to design a cost-effective architecture that meets these requirements. What should the solutions architect recommend?
A. Configure AWS CloudTrail trails to record S3 API calls. Use AWS AppSync to process the files.
B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.
C. Configure Amazon Kinesis Data Streams to process and send data to Amazon S3. Invoke an AWS Lambda function to process the files.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to process files uploaded to Amazon S3. Invoke an AWS Lambda function to process the files.
B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.
- This option leverages Amazon S3 event notifications to trigger an AWS Lambda function when an object (file) is created in the S3 bucket.
- AWS Lambda provides a serverless computing service, enabling code execution without the need to provision or manage servers.
- Lambda can be programmed to process the files, extract metadata and perform any other necessary tasks.
- Lambda can automatically scale based on the number of incoming events, making it suitable for variable uploads, from a few files per hour to hundreds of simultaneous uploads.
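A minimal sketch of the event notification described above (the bucket name and function ARN are placeholders; the Lambda function also needs a resource policy allowing S3 to invoke it):

```python
import boto3

s3 = boto3.client("s3")

# Invoke the processing Lambda function on every object-created event.
s3.put_bucket_notification_configuration(
    Bucket="upload-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:extract-metadata",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)
```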
785 # An enterprise application is deployed on Amazon EC2 instances and uses AWS Lambda functions for an event-driven architecture. The company uses non-production development environments in a different AWS account to test new features before the company deploys the features to production. Production instances show constant usage due to clients in different time zones. The company uses non-production instances only during business hours Monday through Friday. The company does not use non-production instances on weekends. The company wants to optimize costs for running its application on AWS. Which solution will meet these requirements in the MOST cost-effective way?
A. Use on-demand instances for production instances. Use dedicated hosts for non-production instances only on weekends.
B. Use reserved instances for production and non-production instances. Shut down non-production instances when they are not in use.
C. Use compute savings plans for production instances. Use on-demand instances for non-production instances. Shut down non-production instances when they are not in use.
D. Use dedicated hosts for production instances. Use EC2 instance savings plans for non-production instances.
C. Use compute savings plans for production instances. Use on-demand instances for non-production instances. Shut down non-production instances when they are not in use.
- Compute Savings Plans provide significant cost savings for a commitment to a constant amount of compute usage (measured in $/hr) over a 1 or 3 year term. This is suitable for production instances that show constant usage.
- Using on-demand instances for non-production instances provides flexibility without a long-term commitment, and shutting down non-production instances when they are not in use keeps their cost to a minimum.
- This approach takes advantage of the cost-effectiveness of savings plans for predictable workloads and the flexibility of on-demand instances for sporadic use.
786 # A company stores data in an on-premises Oracle relational database. The company needs the data to be available in Amazon Aurora PostgreSQL for analysis. The company uses an AWS site-to-site VPN connection to connect its on-premises network to AWS. The company must capture changes that occur to the source database during migration to Aurora PostgreSQL. What solution will meet these requirements?
A. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to an Aurora PostgreSQL schema. Use the AWS Database Migration Service (AWS DMS) full load migration task to migrate the data.
B. Use AWS DataSync to migrate data to an Amazon S3 bucket. Import data from S3 to Aurora PostgreSQL using the Aurora PostgreSQL aws_s3 extension.
C. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to an Aurora PostgreSQL schema. Use AWS Database Migration Service (AWS DMS) to migrate existing data and replicate ongoing changes.
D. Use an AWS Snowball device to migrate data to an Amazon S3 bucket. Import data from S3 to Aurora PostgreSQL using the Aurora PostgreSQL aws_s3 extension.
C. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to an Aurora PostgreSQL schema. Use AWS Database Migration Service (AWS DMS) to migrate existing data and replicate ongoing changes.
- AWS Schema Conversion Tool (AWS SCT): This tool helps convert the source database schema to a format compatible with the target database. In this case, it converts the Oracle schema to an Aurora PostgreSQL schema.
- AWS Database Migration Service (AWS DMS):
- Full Load Migration: Can be used initially to migrate existing data from on-premises Oracle database to Aurora PostgreSQL.
- Ongoing Change Replication: AWS DMS can be configured for continuous replication, capturing changes to the source database and applying them to the target Aurora PostgreSQL database. This ensures that changes made to the Oracle database during the migration process are also reflected in Aurora PostgreSQL.
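A minimal sketch of a DMS task that performs the full load and then replicates ongoing changes (all ARNs are placeholders for endpoints and a replication instance created beforehand):

```python
import json
import boto3

dms = boto3.client("dms")

# Migration task that loads existing data and then captures ongoing changes (CDC).
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-pg",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```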
787 # A company built an application with Docker containers and needs to run the application in the AWS cloud. The company wants to use a managed service to host the application. The solution must scale appropriately according to the demand for individual container services. The solution should also not result in additional operational overhead or infrastructure to manage. What solutions will meet these requirements? (Choose two.)
A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes.
A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
- AWS Fargate is a serverless compute engine for containers, eliminating the need to manage the underlying EC2 instances.
- Automatically scales to meet application demand without manual intervention.
- Abstracts infrastructure management, providing a serverless experience for containerized applications.
788 # An e-commerce company is running a seasonal online sale. The company hosts its website on Amazon EC2 instances that span multiple availability zones. The company wants its website to handle traffic surges during the sale. Which solution will meet these requirements in the MOST cost-effective way?
A. Create an auto-scaling group that is large enough to handle the maximum traffic load. Stop half of your Amazon EC2 instances. Configure the auto-scaling group to use stopped instances to scale when traffic increases.
B. Create an Auto Scaling group for the website. Set the minimum auto-scaling group size so that it can handle large volumes of traffic without needing to scale.
C. Use Amazon CloudFront and Amazon ElastiCache to cache dynamic content with an auto-scaling group set as the origin. Configure the Auto Scaling group with the instances necessary to populate CloudFront and ElastiCache. Scale after the cache is completely full.
D. Configure an auto-scaling group to scale as traffic increases. Create a launch template to start new instances from a preconfigured Amazon Machine Image (AMI).
D. Configure an auto-scaling group to scale as traffic increases. Create a launch template to start new instances from a preconfigured Amazon Machine Image (AMI).
Provides elasticity, automatically scaling to handle increased traffic. The launch template allows for consistent instance configuration.
In summary, while each option has its merits, Option D, with its focus on dynamic scaling using an Auto Scaling group and a launch template, is preferred for its balance of cost-effectiveness and responsiveness to varying traffic patterns. It aligns with best practices for scaling web applications on AWS.
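A minimal sketch of option D with boto3 (the group, template, and subnet names are placeholders; the target value is illustrative):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Auto Scaling group built from a preconfigured launch template.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="sale-web-asg",
    LaunchTemplate={"LaunchTemplateName": "sale-web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=20,
    VPCZoneIdentifier="subnet-aaa,subnet-bbb,subnet-ccc",
)

# Target tracking policy so capacity follows the traffic surge automatically.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="sale-web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```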
789 # A solutions architect must provide an automated solution for an enterprise's compliance policy that states that security groups cannot include a rule that allows SSH access from 0.0.0.0/0. The company must be notified if there is any violation of the policy.
A solution is needed as soon as possible. What should the solutions architect do to meet these requirements with the least operational overhead?
A. Write an AWS Lambda script that monitors security groups for SSH being open to 0.0.0.0/0 addresses and creates a notification whenever it finds one.
B. Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service (Amazon SNS) notification when a non-compliant rule is created.
C. Create an IAM role with permissions to globally open security groups and network ACLs. Create an Amazon Simple Notification Service (Amazon SNS) topic to generate a notification each time a user assumes the role.
D. Configure a service control policy (SCP) that prevents non-administrative users from creating or editing security groups. Create a notification in the ticket system when a user requests a rule that requires administrator permissions.
B. Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service (Amazon SNS) notification when a non-compliant rule is created.
Takes advantage of the AWS Config managed rule, minimizing manual scripting. Config provides automated compliance checks.
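A minimal sketch of enabling the managed rule; the SNS notification would typically be wired up separately, for example via an EventBridge rule on Config compliance-change events (that wiring is assumed, not shown):

```python
import boto3

config = boto3.client("config")

# Enable the AWS managed rule that flags security groups allowing SSH from 0.0.0.0/0.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",
        "Source": {"Owner": "AWS", "SourceIdentifier": "INCOMING_SSH_DISABLED"},
    }
)
# Compliance-change events can then be routed to an SNS topic (for example via an
# EventBridge rule on "Config Rules Compliance Change") to notify the company.
```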
790 # A company has deployed an application to an AWS account. The application consists of microservices running on AWS Lambda and Amazon Elastic Kubernetes Service (Amazon EKS). A separate team supports each microservice. The company has multiple AWS accounts and wants to give each team their own account for their microservices. A solutions architect needs to design a solution that provides service-to-service communication over HTTPS (port 443). The solution must also provide a service registry for service discovery. Which solution will meet these requirements with the LEAST administrative overhead?
A. Create an inspection VPC. Deploy an AWS Network Firewall firewall in the inspection VPC. Attach the inspection VPC to a new transit gateway. Route VPC-to-VPC traffic to the inspection VPC. Apply firewall rules to allow only HTTPS communication.
B. Create a VPC Lattice service network. Associate the microservices with the service network. Define HTTPS listeners for each service. Register microservices computing resources as targets. Identify VPCs that need to communicate with the services. Associate those VPCs with the service network.
C. Create a network load balancer (NLB) with an HTTPS listener and target groups for each microservice. Create an AWS PrivateLink endpoint service for each microservice. Create a VPC interface endpoint in each VPC that needs to consume that microservice.
D. Create peering connections between VPCs that contain microservices. Create a list of prefixes for each service that requires a connection to a client. Create route tables to route traffic to the appropriate VPC. Create security groups to allow only HTTPS communication.
B. Create a VPC Lattice service network. Associate the microservices with the service network. Define HTTPS listeners for each service. Register microservices computing resources as targets. Identify VPCs that need to communicate with the services. Associate those VPCs with the service network.
- Uses a service network for association and communication.
- Specific HTTPS listeners and targets for each service.
Taking into account the requirements and the need for the least administrative overhead, option B provides a decentralized approach built on a VPC Lattice service network, which also acts as the service registry for discovery. While it involves some initial configuration, it allows specific association and communication between microservices.
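A rough sketch of the first steps, assuming the boto3 "vpc-lattice" client and its create_service_network / create_service_network_vpc_association calls (names and IDs are placeholders):

```python
import boto3

lattice = boto3.client("vpc-lattice")

# Create the service network and associate a consumer VPC with it.
network = lattice.create_service_network(name="microservices-network")

lattice.create_service_network_vpc_association(
    serviceNetworkIdentifier=network["id"],
    vpcIdentifier="vpc-0123456789abcdef0",
)
# Each microservice is then registered as a VPC Lattice service with an HTTPS
# listener on port 443 and its compute resources added as targets.
```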
791 # A company has a mobile game that reads most of its metadata from an Amazon RDS DB instance. As the game increased in popularity, developers noticed slowdowns related to game metadata loading times. Performance metrics indicate that simply scaling the database will not help. A solutions architect should explore all options including capabilities for snapshots, replication, and sub-millisecond response times. What should the solutions architect recommend to solve these problems?
A. Migrate the database to Amazon Aurora with Aurora Replicas.
B. Migrate the database to Amazon DynamoDB with global tables.
C. Add an Amazon ElastiCache for Redis layer in front of the database.
D. Add an Amazon ElastiCache layer for Memcached in front of the database.
B. Migrate the database to Amazon DynamoDB with global tables.
- DynamoDB is designed for low-latency access and delivers consistent single-digit-millisecond response times (microsecond latency is possible with DynamoDB Accelerator (DAX)).
- Global tables offer multi-region replication for high availability.
- DynamoDB’s architecture and features are well suited for scenarios with strict performance expectations.
Other Considerations:
A. Migrate the database to Amazon Aurora with Aurora Replicas:
- Aurora is known for its high performance, but sub-millisecond response times may not be guaranteed in all scenarios.
- Aurora replicas provide read scalability, but may not meet the submillisecond requirement.
C. Add an Amazon ElastiCache for Redis layer in front of the database:
- ElastiCache for Redis is an in-memory caching solution.
- While it may improve read performance, it may not guarantee sub-millisecond response times for all use cases.
D. Add an Amazon ElastiCache for Memcached layer in front of the database:
- Similar to Redis, ElastiCache for Memcached is a caching solution.
- Caching may improve read performance, but may not guarantee sub-millisecond response times.
792 # A company uses AWS Organizations for its multi-account AWS setup. The security organizational unit (OU) needs to share approved Amazon Machine Images (AMIs) with the development OU. The AMIs are created from snapshots that are encrypted with AWS Key Management Service (AWS KMS) keys. Which combination of steps will meet these requirements? (Choose two.)
A. Add the development team's OU Amazon Resource Name (ARN) to the launch permissions list for the AMIs.
B. Add the organizations root Amazon Resource Name (ARN) to the launch permissions list for AMIs.
C. Update the key policy to allow the development team’s OU to use the AWS KMS keys that are used to decrypt snapshots.
D. Add the development team account's Amazon Resource Name (ARN) to the launch permissions list for the AMIs.
E. Recreate the AWS KMS key. Add a key policy to allow the root of Amazon Resource Name (ARN) organizations to use the AWS KMS key.
A. Add the development team's OU Amazon Resource Name (ARN) to the launch permissions list for the AMIs.
C. Update the key policy to allow the development team’s OU to use the AWS KMS keys that are used to decrypt snapshots.
- Option A: Add the Amazon Resource Name (ARN) of the development team's OU to the launch permissions list for the AMIs.
- Explanation: This option controls who can launch the AMIs. By adding the development team's OU to the launch permissions, you give that OU the ability to use the AMIs.
- Fits the requirement: share the AMIs.
Option C:
- Update key policy to allow the development team OU to use AWS KMS keys that are used to decrypt snapshots:
- Explanation: This option addresses decryption permissions. If you want your development team’s OU to use AWS KMS keys to decrypt snapshots (required to launch AMIs), adjusting the key policy is the right approach.
- Fits the requirement: Share encrypted snapshots.
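A minimal sketch of both steps (the AMI ID, OU ARN, and key details are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Share the AMI with the development OU by adding it to the launch permissions.
ec2.modify_image_attribute(
    ImageId="ami-0123456789abcdef0",
    LaunchPermission={
        "Add": [{"OrganizationalUnitArn": "arn:aws:organizations::111122223333:ou/o-example/ou-dev1"}]
    },
)

# The KMS key policy must also allow the development OU to use the key that
# encrypts the AMI snapshots (e.g. kms:Decrypt, kms:DescribeKey, kms:CreateGrant),
# which can be applied with kms.put_key_policy(...) on the snapshot encryption key.
```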
793 # A data analysis company has 80 offices that are distributed worldwide. Each office hosts 1 PB of data and has between 1 and 2 Gbps of Internet bandwidth. The company needs to perform a one-time migration of a large amount of data from its offices to Amazon S3. The company must complete the migration within 4 weeks. Which solution will meet these requirements in the MOST cost-effective way?
A. Establish a new 10 Gbps AWS Direct Connect connection to each office. Transfer the data to Amazon S3.
B. Use multiple AWS Snowball Edge storage-optimized devices to store and transfer data to Amazon S3.
C. Use an AWS snowmobile to store and transfer the data to Amazon S3.
D. Configure an AWS Storage Gateway Volume Gateway to transfer data to Amazon S3.
B. Use multiple AWS Snowball Edge storage-optimized devices to store and transfer data to Amazon S3.
- Considerations: This option can be cost-effective and efficient, especially when dealing with large data sets. It takes advantage of physical transport, reducing the impact on Internet bandwidth.
Other Options:
- Option C: Use an AWS Snowmobile:
- Explanation: AWS Snowmobile is a high-capacity data transfer service that involves a secure shipping container. It is designed for massive data migrations.
- Considerations: While Snowmobile is efficient for extremely large data volumes, it could be overkill for the described scenario of 80 offices with 1 PB of data each.
794 # A company has an Amazon Elastic File System (Amazon EFS) file system that contains a set of reference data. The company has applications on Amazon EC2 instances that need to read the data set. However, applications should not be able to change the data set. The company wants to use IAM access control to prevent applications from modifying or deleting the data set. What solution will meet these requirements?
A. Mount the EFS file system in read-only mode from within the EC2 instances.
B. Create a resource policy for the EFS file system that denies the elasticfilesystem:ClientWrite action to IAM roles that are attached to EC2 instances.
C. Create an identity policy for the EFS file system that denies the elasticfilesystem:ClientWrite action on the EFS file system.
D. Create an EFS access point for each application. Use Portable Operating System Interface (POSIX) file permissions to allow read-only access to files in the root directory.
C. Create an identity policy for the EFS file system that denies the elasticfilesystem:ClientWrite action on the EFS file system.
- Option C: Create an identity policy for the EFS file system that denies the elasticfilesystem:ClientWrite action on the EFS file system:
- This option is also aligned with IAM access control, denying actions using identity policies.
- Option C is a valid option to control modifications through IAM
Other Options:
- Option B: Create a resource policy for the EFS file system that denies the elasticfilesystem:ClientWrite action to IAM roles that are associated with EC2 instances:
- This option involves IAM roles and policies of resources, aligning with the IAM access control requirement.
- Option B is a valid option to use IAM access control to prevent modifications.
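A minimal sketch of the identity-policy approach from option C, attaching a deny on elasticfilesystem:ClientWrite to the IAM role used by the application instances (the role name and file system ARN are placeholders):

```python
import json
import boto3

iam = boto3.client("iam")

# Identity-based policy denying write access to the EFS file system.
deny_write = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "elasticfilesystem:ClientWrite",
        "Resource": "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0123456789abcdef0",
    }],
}

iam.put_role_policy(
    RoleName="app-instance-role",
    PolicyName="efs-read-only",
    PolicyDocument=json.dumps(deny_write),
)
```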
795 # A company has hired a third-party vendor to perform work on the company’s AWS account. The provider uses an automated tool that is hosted in an AWS account that the provider owns. The provider does not have IAM access to the company’s AWS account. The company must grant the provider access to the company’s AWS account. Which solution will MOST securely meet these requirements?
A. Create an IAM role in the company account to delegate access to the provider IAM role. Attach the appropriate IAM policies to the role for the permissions the provider requires.
B. Create an IAM user in the company account with a password that meets the password complexity requirements. Attach the appropriate IAM policies to the user for the permissions the provider requires.
C. Create an IAM group in the company account. Add the automated tool's IAM user from the provider account to the group. Attach the appropriate IAM policies to the group for the permissions that the provider requires.
D. Create an IAM user in the company account that has a permissions boundary that allows the provider account. Attach the appropriate IAM policies to the user for the permissions the provider requires.
A. Create an IAM role in the company account to delegate access to the provider IAM role. Attach the appropriate IAM policies to the role for the permissions the provider requires.
- Explanation: This option involves creating a cross-account IAM role to delegate access to the provider IAM role. The role will have policies attached for the required permissions.
- Security: This is a secure approach as it follows the principle of least privilege and uses cross-account roles for access.
Other Options:
- Option B: Create an IAM user in the company account with a password that meets the password complexity requirements:
- Explanation: This option involves creating a local IAM user in the company account with the appropriate policies attached for the required permissions.
- Security: Using a local IAM user with a password introduces long-lived credentials and associated risks; it is generally recommended to use roles and temporary credentials instead.
- Option C: Create an IAM group in the company account. Add the automated tool IAM user from the provider account to the group:
- Explanation: This option involves adding the provider's IAM user to an IAM group in the company account and attaching policies to the group for the required permissions.
- Security: While IAM groups are a good practice, directly adding external IAM users (from another account) to a group in the company account is less secure and may not be a best practice.
- Option D: Create an IAM user in the company account that has a permissions boundary that allows the provider account.
- Explanation: This option involves creating an IAM user with a permissions boundary that allows the provider account. Policies are attached to the user for the required permissions.
- Security: This approach uses a permissions boundary for control, but creating a long-lived IAM user is not as secure as using a cross-account role.
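A minimal sketch of the cross-account role from option A (the vendor account ID, external ID, and role name are placeholders; the external ID is a common addition for third-party access, not something stated in the question):

```python
import json
import boto3

iam = boto3.client("iam")

# Role in the company account that the vendor's automation can assume.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "vendor-tool-example-id"}},
    }],
}

iam.create_role(
    RoleName="vendor-access-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
# Attach only the policies the vendor actually needs, for example:
# iam.attach_role_policy(RoleName="vendor-access-role",
#                        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess")
```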
796 # A company wants to run its experimental workloads in the AWS cloud. The company has a budget for cloud spending. The company's CFO is concerned about the accountability of each department's cloud spending. The CFO wants to be notified when spending reaches 60% of the budget. What solution will meet these requirements?
A. Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget.
B. Use AWS Cost Explorer forecasts to determine resource owners. Use AWS Cost Anomaly Detection to create alert threshold notifications when spending exceeds 60% of budget.
C. Use cost allocation tags on AWS resources to tag owners. Use the AWS Support API in AWS Trusted Advisor to create alert threshold notifications when spending exceeds 60% of budget.
D. Use AWS Cost Explorer forecasts to determine resource owners. Create usage budgets in AWS Budgets. Add an alert threshold to be notified when spending exceeds 60% of budget.
A. Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget.
- Explanation: This option suggests using cost allocation tags to tag owners, creating usage budgets in AWS Budgets, and setting an alert threshold for notification when spending exceeds 60% of the budget.
- Pros: Uses cost allocation tags to identify resource owners, and AWS Budgets is designed specifically for budgeting and cost tracking.
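A minimal sketch of a cost budget with a 60% actual-spend alert (the account ID, budget amount, and email address are placeholders):

```python
import boto3

budgets = boto3.client("budgets")

# Monthly cost budget with an alert when actual spend reaches 60% of the limit.
budgets.create_budget(
    AccountId="111122223333",
    Budget={
        "BudgetName": "experimental-workloads",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 60.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "cfo@example.com"}],
    }],
)
```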