Cert Prep: Certified Solutions Architect - Associate for AWS (SAA-C03) Flashcards
As the new Security Engineer for your company’s AWS cloud environment, you are responsible for developing best practice guidelines. In addition to data security such as encryption, you need to develop a plan for Security Groups, Access Control Lists, and IAM Policies. You want to roll out best practice policies for IAM.
Which choice below is not an IAM best practice?
A. Share access keys for cross-account access.
B. Use policy conditions for extra security.
C. Delegate by using roles instead of sharing credentials.
D. Rotate credentials regularly.
Which of the following data transfer solutions are free? Select three choices.
A. Data transfer between EC2, RDS, and Redshift in the same Availability Zone
B. Data transferred into and out of Elastic Load Balancers using private IP addresses
C. Data transferred into and out from an IPv6 address in a different VPC
D. Data transfer directly between S3, Glacier, DynamoDB, and EC2 in the same AWS Region
You are designing a web application that needs to be highly available and handle a large amount of read traffic. You are planning an RDS database with a Multi-AZ configuration that will store transaction data including personal customer data.
You are considering options to help offset some of the read traffic, and your client wants to discuss multiple options outside of Amazon RDS features. What other Amazon services would best offload the read traffic workload from the application’s database without requiring extensive app design changes?
A. Migrate static, WORM data to public Amazon S3 buckets.
B. Implement an ElastiCache instance to cache frequently-accessed data.
C. Configure an SQS queue to manage read requests for frequently-accessed data.
D. Promote the Multi-AZ standby database to a read replica during peak hours.
B. Implement an ElastiCache instance to cache frequently-accessed data.
Explanation:
Amazon ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. Amazon ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory system, instead of relying entirely on slower disk-based databases. Amazon ElastiCache is ideally suited as a front-end for Amazon Web Services like Amazon RDS and Amazon DynamoDB, providing extremely low latency for high-performance applications and offloading some of the request workload while these services provide long-lasting data durability.
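The cache-aside pattern described above can be sketched with plain Python dictionaries standing in for the ElastiCache node and the RDS database. All names and records here are hypothetical illustrations, not a real ElastiCache client:

```python
# Minimal cache-aside sketch: plain dicts stand in for ElastiCache (cache)
# and RDS (database). Names and data are hypothetical illustrations.
cache = {}  # stands in for a Redis/Memcached node
database = {"customer:42": {"name": "Pat", "tier": "gold"}}  # stands in for RDS

def get_customer(key):
    """Return a record, reading through the cache before hitting the database."""
    if key in cache:               # cache hit: fast, in-memory read
        return cache[key]
    record = database.get(key)     # cache miss: fall back to the slower store
    if record is not None:
        cache[key] = record        # populate the cache for subsequent reads
    return record

first = get_customer("customer:42")   # miss: read from "RDS", then cached
second = get_customer("customer:42")  # hit: served from the "cache"
```

The point of the pattern is that repeat reads never reach the database, which is exactly how ElastiCache offloads read traffic from RDS without extensive application redesign.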
Which would be the most efficient way to review and reduce networking costs by deleting idle load balancers?
A. Use the AWS Trusted Advisor Idle Load Balancers check to get a report of load balancers with a RequestCount of less than 100 in the last week. Send this report to S3 to find the load balancers to delete.
B. Use the AWS SDK to create a Lambda function to find and delete load balancers with RequestCount <10 in the last week.
C. Use the AWS Management Console to query and delete load balancers on each appropriate EC2 instance with RequestCount < 100 in the last week.
D. Use the Amazon Inspector Idle Load Balancers check to get a report of load balancers with a RequestCount of less than 100 in the last week. Send this report to S3 to find the load balancers to delete.
A company stores its application data onsite using iSCSI connections to archival disk storage located at an on-premises data center. Now management wants to identify cloud solutions to back up that data to the AWS cloud and store it at minimal expense.
The company needs to back up 200 TB of data to the cloud over the course of a month, and the speed of the backup is not a critical factor. The backups are rarely accessed, but when requested, they should be available in less than 6 hours.
What are the most cost-effective steps to archiving the data and meeting additional requirements?
A. 1) Copy the data to AWS Storage Gateway file gateways.
2) Store the data in Amazon S3 in the S3 Glacier Flexible Retrieval storage class.
B. 1) Copy the data to Amazon S3 using AWS DataSync.
2) Store the data in Amazon S3 in the S3 Glacier Deep Archive storage class.
C. 1) Back up the data using AWS Storage Gateway volume gateways.
2) Store the data in Amazon S3 in the S3 Glacier Flexible Retrieval storage class.
D. 1) Migrate the data to AWS using an AWS Snowball Edge device.
2) Store the data in Amazon S3 in the S3 Glacier Deep Archive storage class.
Your latest client contacted you a week before an audit on its AWS cloud infrastructure. Your client is concerned about its lack of automated policy enforcement for data protection and the difficulties they encounter when reporting for audit and compliance.
Which service should you enable to assist this client?
A. Amazon Macie
B. AWS DataSync
C. Amazon GuardDuty
D. AWS Backup
D. AWS Backup
Explanation:
The client is in search of a solution that automates policy enforcement for data protection and compliance. With AWS Backup, the client can enable automated data protection policies and schedules that will meet the regulatory compliance requirements for its upcoming audit. AWS Backup also allows you to centrally manage and automate the backup of data across AWS services such as EC2, S3, EBS, RDS, EFS, FSx, and more.
The remaining choices are incorrect for the following reasons:
AWS DataSync is a data transfer service that enables you to optimize network bandwidth and accelerate data transfer between on-premises storage and AWS storage. DataSync does not provide policy enforcement for data protection.
Amazon GuardDuty is a threat detection service that continuously monitors AWS accounts and workloads for malicious activity and anomalous behavior.
Although Amazon Macie protects your data through discovery and protection of sensitive data at scale, Macie does not provide automated data protection, compliance, and governance for applications running in the cloud.
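As a sketch of the kind of automated, scheduled policy AWS Backup enables, the dictionary below has the shape the CreateBackupPlan API expects (it would be passed to boto3's backup client via `client.create_backup_plan(BackupPlan=...)`). The plan name, schedule, and retention values are hypothetical:

```python
# Shape of a daily backup plan as AWS Backup's CreateBackupPlan API expects it.
# All names, schedule, and retention values are hypothetical placeholders.
backup_plan = {
    "BackupPlanName": "daily-compliance-plan",
    "Rules": [
        {
            "RuleName": "daily-35-day-retention",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # 05:00 UTC every day
            "Lifecycle": {"DeleteAfterDays": 35},       # retention for audit/compliance
        }
    ],
}

rule = backup_plan["Rules"][0]
```

Pairing a plan like this with resource assignments (by tag or ARN) is what gives the centralized, auditable policy enforcement the scenario asks for.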
An environmental agency is concluding a 10-year study of mining sites and needs to transfer over 200 terabytes of data to the AWS cloud for storage and analysis.
Data will be gradually collected over a period of weeks in an area with no existing network bandwidth. Given the remote location, the agency wants a transfer solution that is cost-effective while requiring minimal device shipments back and forth.
Which AWS solution will best address the agency’s data storage and migration requirements?
A. AWS Snowcone
B. AWS Snowmobile
C. AWS Snowball Edge Compute Optimized with GPU
D. AWS Snowball Edge Storage Optimized
You are rapidly configuring a VPC for a new insurance application that needs to go live imminently to meet an upcoming compliance deadline. The insurance company must migrate a new application to this new VPC and connect it as quickly as possible to an on-premises, company-owned application that cannot migrate to the cloud.
Your immediate goal is to connect the on-premises app as quickly as possible, but speed and reliability are critical long-term requirements. The insurance company suggests implementing the quickest connection method now and, if necessary, switching over to a faster, more reliable connection service within the next six months.
Which strategy would work best to satisfy their short- and long-term networking requirements?
A. AWS VPN is the best short-term and long-term solution.
B. AWS VPN is the best short-term solution, and AWS Direct Connect is the best long-term solution.
C. VPC Endpoints are the best short-term and long-term solutions.
D. VPC Endpoints are the best short-term solution, and AWS VPN is the best long-term solution.
A pharmaceutical company is building an application that will use both AWS and on-premises resources. The application must comply with regulatory requirements and ensure the protection of intellectual property. One of the essential requirements is that data transferred between AWS and on-premises resources should not flow through the public internet. The company currently manages a single VPC with two private subnets in two different availability zones.
Which solution would enable connectivity between AWS and on-premises resources while maintaining a private connection?
A. Use a virtual private gateway with a customer gateway and create a site-to-site VPN connection.
B. Use AWS Transit Gateway to create a private site-to-site VPN connection.
C. Use AWS Direct Connect with a virtual private gateway and a private virtual interface (private VIF).
D. Use AWS VPN CloudHub to create a private site-to-site VPN connection.
C. Use AWS Direct Connect with a virtual private gateway and a private virtual interface (private VIF).
Explanation:
Several AWS services are available to help organizations connect AWS cloud resources with their on-premises infrastructure. Using either AWS Direct Connect or a virtual private gateway with a site-to-site VPN connection are standard solutions to accomplish this goal. However, the key to this question is that the team is looking for a solution where data transferred between AWS and on-premises resources does not flow through the public internet. Because AWS Direct Connect uses a dedicated network connection and does not use the public internet to connect AWS resources to an on-premises network, this is the correct choice. Using a virtual private gateway with a customer gateway to create a site-to-site VPN connection would work, but it uses existing internet connections.
Now, let’s look at the other services mentioned in the remaining choices:
Though the Transit Gateway service can help connect multiple VPCs together with an on-premises network, it alone will not establish a private connection as described in this scenario. A transit gateway can be used with either AWS Direct Connect or a virtual private gateway to connect VPCs with an on-premises network. AWS VPN CloudHub is a service that solutions architects can use with a virtual private gateway to connect multiple customer networks located at different locations. With the virtual private gateway, the remote sites can communicate with each other and the customer's Amazon VPCs.
For more information on options for connecting customer networks to Amazon VPCs, take a look at the Amazon Virtual Private Cloud Connectivity Options whitepaper.
You host two separate applications that utilize the same DynamoDB tables containing environmental data. The first application, which focuses on data analysis, is hosted on compute-optimized EC2 instances in a private subnet. It retrieves raw data, processes the data, and uploads the results to a second DynamoDB table. The second application is a public website hosted on general-purpose EC2 instances within a public subnet and allows researchers to view the raw and processed data online.
For security reasons, you want both applications to access the relevant DynamoDB tables within your VPC rather than sending requests over the internet. You also want to ensure that while your data analysis application can retrieve and upload data to DynamoDB, outside researchers will not be able to upload data or modify any data through the public website.
How can you ensure each application is granted the correct level of authorization? (Choose 2 answers)
A. Deploy a DynamoDB VPC endpoint in the data analysis application’s private subnet, and a DynamoDB VPC endpoint in the public website’s public subnet.
B. Deploy one DynamoDB VPC endpoint in its own subnet. Update the route tables for each application’s subnet with routes to the DynamoDB VPC endpoint.
C. Configure and implement a single VPC endpoint policy to grant access to both applications.
D. Configure and implement separate VPC endpoint policies for each application.
A company is developing a mission-critical API on AWS using a Lambda function that accesses data stored in Amazon DynamoDB. Once it is in production, the API should respond in microseconds. The database configuration needs to handle high throughput and be capable of withstanding spikes in CPU consumption.
Which configuration options should the solutions architect choose to meet these requirements?
A. DynamoDB with auto scaling
B. DynamoDB provisioned capacity
C. DynamoDB with DAX burstable instances
D. DynamoDB on-demand capacity
Your company is concerned with potential poor architectural practices used by your core internal application. After recently migrating to AWS, you hope to take advantage of an AWS service that recommends best practices for specific workloads.
As a Solutions Architect, which of the following services would you recommend for this use case?
A. AWS Trusted Advisor
B. AWS Well-Architected Framework
C. Amazon Inspector
D. AWS Well-Architected Tool
A telecommunications company is developing an AWS cloud data bridge solution to process large amounts of data in real time from millions of IoT devices. The IoT devices communicate with the data bridge using UDP (User Datagram Protocol).
The company has deployed a fleet of EC2 instances to handle the incoming traffic but needs to choose the right Elastic Load Balancer to distribute traffic between the EC2 instances.
Which Amazon Elastic Load Balancer is the appropriate choice in this scenario?
A. Network Load Balancer
B. Application Load Balancer
C. Gateway Load Balancer
D. Classic Load Balancer
A team of solutions architects designed an eCommerce website. The team is concerned about API calls from malicious IP addresses or anomalous behaviors. They would like an intelligent service to continuously monitor their AWS accounts and workloads and then deploy AWS Lambda functions for remediations.
How would the solutions architects protect this web presence against the threats that they are concerned about?
A. Assess their AWS account and workloads with Amazon CodeGuru
B. Deploy Amazon GuardDuty on their AWS account and workloads.
C. Monitor their AWS account and workloads with Amazon Cognito
D. Enable Amazon Inspector on their AWS account and workloads.
You are designing an AWS cloud environment for a client. There are applications that will not be migrated to the cloud environment, so it will be a hybrid solution. You also need to create an EFS file system that both the cloud and on-premises environments need to access. You will use Direct Connect to facilitate the communication between the on-premises servers and the EFS file system. Which statement characterizes how Amazon will charge you for this configuration?
A. You will be charged for AWS Direct Connect and for the data transmitted between the on-premises servers and EFS.
B. This is all covered under the VPC charge so there is no additional charge.
C. You will be charged for AWS Direct Connect; there is no additional cost for on-premises access to your Amazon EFS file systems.
D. There is no charge for Direct Connect and a flat fee for EFS.
C. You will be charged for AWS Direct Connect; there is no additional cost for on-premises access to your Amazon EFS file systems.
Explanation:
By using an Amazon EFS file system mounted on an on-premises server, you can migrate on-premises data into the AWS Cloud hosted in an Amazon EFS file system. You can also take advantage of bursting, meaning that you can move data from your on-premises servers into Amazon EFS, analyze it on a fleet of Amazon EC2 instances in your Amazon VPC, and then store the results permanently in your file system or move the results back to your on-premises server. There is no additional cost for on-premises access to your Amazon EFS file systems. Note that you’ll be charged for the AWS Direct Connect connection to your Amazon VPC.
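As a sketch of what on-premises access looks like in practice, the snippet below assembles the kind of NFS mount command an on-premises server would run against an EFS mount target's IP address (the file system's DNS name does not resolve outside the VPC, so the mount target IP is used). The IP address and mount point are hypothetical placeholders, built here in Python purely for illustration:

```python
# Sketch of the NFS mount command an on-premises server would use to mount an
# EFS file system over Direct Connect. The mount target IP and local mount
# point are hypothetical placeholders.
mount_target_ip = "10.0.1.25"    # IP of an EFS mount target inside the VPC (hypothetical)
local_mount_point = "/mnt/efs"   # where the file system appears on the server

# Commonly recommended NFSv4.1 options for EFS: large read/write sizes,
# hard mounts, and retry tuning.
nfs_options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"

mount_command = (
    f"sudo mount -t nfs4 -o {nfs_options} {mount_target_ip}:/ {local_mount_point}"
)
```

Once mounted, reads and writes traverse the Direct Connect link, which is where the charges in the correct answer come from.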
You have implemented Amazon S3 multipart uploads to import on-premises files into the AWS cloud.
While the process is running smoothly, there are concerns about network utilization, especially during peak business hours, when multipart uploads require shared network bandwidth.
What is a cost-effective way to minimize network issues caused by S3 multipart uploads?
A. Transmit multipart uploads to AWS using VPC endpoints.
B. Transmit multipart uploads to AWS using AWS Direct Connect.
C. Pause multipart uploads during peak network usage.
D. Compress objects before initiating multipart uploads.
You plan to develop an efficient auto scaling process for EC2 instances. A key to this will be bootstrapping for newly created instances. You want to configure new instances as quickly as possible to get them into service efficiently upon startup. What tasks can bootstrapping perform? (Choose 3 answers)
A. Increase network throughput
B. Enroll an instance into a directory service
C. Install application software
D. Apply patches and OS updates
Your data engineering team has recently migrated its Hadoop infrastructure to AWS. They ask if you are aware of options for higher-speed network connectivity between their instances.
What two enhanced network options can you present to the team? (Choose 2 answers)
A. Elastic Network Adapter (ENA)
B. Dual-stack Network Adapter (DNA)
C. Intel 82599 Virtual Function (VF) interface
D. AMD Opteron Virtual Function (VF) interface
A. Elastic Network Adapter (ENA)
C. Intel 82599 Virtual Function (VF) interface
Explanation:
Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Depending on your instance type, you will either use the Intel 82599 Virtual Function interface or the Elastic Network Adapter.
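A quick way to see whether ENA enhanced networking is active on an instance is to inspect the `EnaSupport` field of a DescribeInstances response. The truncated response below is a hypothetical example of what boto3's `ec2` client returns from `describe_instances()`; the instance ID is a placeholder:

```python
# Sketch: find instances with ENA enhanced networking enabled by inspecting
# the 'EnaSupport' field of a DescribeInstances response. The response dict
# below is a hypothetical, truncated stand-in for a real boto3 response.
response = {
    "Reservations": [
        {"Instances": [{"InstanceId": "i-0abc1234def567890", "EnaSupport": True}]}
    ]
}

# Collect the IDs of instances reporting ENA support.
ena_enabled = [
    inst["InstanceId"]
    for res in response["Reservations"]
    for inst in res["Instances"]
    if inst.get("EnaSupport")
]
```

For the Intel 82599 VF interface, the analogous check is the instance's `sriovNetSupport` attribute rather than `EnaSupport`.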
You are working on a two-tier application hosted on a cluster of EC2 instances behind an Application Load Balancer. During peak times, the web server’s auto scaling group is configured to add additional servers when CPU utilization reaches 70% for the existing servers.
Due to compliance requirements, only approved Amazon Machine Images can be utilized in the creation of servers for the application, and all existing AMIs need to be compliant. You need to determine a way to monitor the EC2 instances for non-compliant Amazon Machine Images and be alerted when a non-compliant image is in use.
Which of the following monitoring solutions would provide the necessary visibility and alerting whenever a non-compliant AMI is in use?
A. Enable AWS Config in the region the application is hosted in. Utilize the AWS-managed rule ‘approved-amis-by-id’ to trigger an alert whenever a non-compliant AMI is in use.
B. Create a CloudWatch Event type ‘EC2 Instance State-change Notification’ in the region the application is hosted in. Create an event rule to trigger an alert whenever a non-compliant AMI is in use.
C. Enable Amazon Inspector for the EC2 instances in the auto-scaling group. Utilize the AWS-managed rule ‘Approved CIS hardened AMIs’ to trigger an alert whenever a non-compliant AMI is in use.
D. Enable AWS Shield in the region the application is hosted in. Create a rule to trigger an alert whenever a non-compliant AMI is in use.
A. Enable AWS Config in the region the application is hosted in. Utilize the AWS-managed rule ‘approved-amis-by-id’ to trigger an alert whenever a non-compliant AMI is in use.
Explanation:
AWS Config can assist with security monitoring by alerting you to when resources such as security groups and IAM credentials have had changes to the baseline configurations. AWS Config has a managed rules set and the AWS managed rule ‘approved-amis-by-id’ can check that running instances are using approved Amazon Machine Images, or AMIs. You can specify a list of approved AMIs by ID or provide a tag to specify the list of AMI Ids.
The remaining choices are incorrect for the following reasons:
● Amazon Inspector is a tool used primarily for checking the network accessibility and security vulnerabilities of your EC2 instances and the security state of the applications running on those instances, as opposed to alerting on non-compliant Amazon Machine Images.
● AWS Shield is a managed DDoS protection service from Amazon for applications running on AWS. AWS Shield rules are not designed to track and alert on non-compliant Amazon Machine Images.
● Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources. The CloudWatch Event type ‘EC2 Instance State-change Notification’ will log state changes of Amazon EC2 instances, not non-compliant Amazon Machine Images.
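A sketch of what enabling the managed rule looks like: the dictionary below has the shape boto3's `config` client expects for `put_config_rule(ConfigRule=...)`. The rule name and AMI ID are hypothetical placeholders; the source identifier `APPROVED_AMIS_BY_ID` is the managed-rule form of ‘approved-amis-by-id’:

```python
# Shape of the AWS Config managed rule 'approved-amis-by-id', as passed to
# boto3's config client via put_config_rule(ConfigRule=...).
# The rule name and AMI ID below are hypothetical placeholders.
import json

config_rule = {
    "ConfigRuleName": "approved-amis-check",
    "Source": {
        "Owner": "AWS",                           # AWS-managed rule
        "SourceIdentifier": "APPROVED_AMIS_BY_ID",
    },
    # Evaluate EC2 instances against the approved AMI list.
    "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    # InputParameters is a JSON-encoded string; amiIds takes a
    # comma-separated list of approved AMI IDs.
    "InputParameters": json.dumps({"amiIds": "ami-0123456789abcdef0"}),
}
```

Pairing this rule with an EventBridge/SNS notification on compliance changes completes the alerting half of the correct answer.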
A new, small hotel chain has hired you to optimize an existing small, single-AZ RDS DB instance that manages reservations for their original location. They recently expanded to new locations and need to optimize their online reservation service.
Incoming requests for reservations could double or triple their existing database size in a matter of hours, depending on how well their advertising works. With how much capital they invested in new locations, they value the availability of the database far above any cost concerns. During this peak period, the RDS database will need to manage an equal number of reads and writes.
With a limited amount of time to prepare for a potential spike, what is the best single step to ensure the database remains available to schedule reservations with no loss of service?
A. Enable multi-AZ configuration.
B. Enable Amazon RDS Storage Auto Scaling.
C. Manually modify the DB instance to a larger instance class.
D. Enable read replicas.
C. Manually modify the DB instance to a larger instance class.
Explanation:
First, let’s review the key pieces of information in this question:
They currently use a small RDS instance to manage reservations... Incoming requests for reservations could double or triple their existing database size in a matter of hours... the RDS database will need to manage an equal number of reads and writes.
Handling a large number of reads and writes will require scaling vertically. Read replicas are ideal for handling spikes in read requests, but will not effectively manage writes.
Multi-AZ configurations are a feature to enable high availability but are not designed to handle increased read or write workloads.
Storage auto scaling could handle storage limitations, but an influx of writes would overwhelm the small instance’s compute and memory limitations. It is also feasible that auto scaling would not scale fast enough, given how quickly the hotel business expects its database to double in size. Auto scaling increases database size gradually, and after storage has scaled once, it cannot scale again for approximately six hours.
This is why the best choice is to manually modify the instance to a larger DB instance class.
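A sketch of the vertical scaling step itself: these are the parameters that would be passed to boto3's `rds` client via `modify_db_instance(**params)`. The instance identifier and target class are hypothetical placeholders:

```python
# Sketch of scaling an RDS instance vertically, as the parameters passed to
# boto3's rds client via modify_db_instance(**scale_up_params).
# The identifier and instance class are hypothetical placeholders.
scale_up_params = {
    "DBInstanceIdentifier": "reservations-db",  # hypothetical instance name
    "DBInstanceClass": "db.r6g.2xlarge",        # larger class for more CPU/memory
    "ApplyImmediately": True,                   # scale now, not at the maintenance window
}
```

Note that `ApplyImmediately` matters here: without it, the class change would wait for the next maintenance window, which could miss the anticipated spike.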
A Solutions Architect is configuring the network security for a new three-tier application. The application has a web tier of EC2 instances in a public subnet, an application tier on EC2 instances in a private subnet, and a large RDS MySQL database instance in a second private subnet behind an internal load balancer.
The web tier will allow inbound requests using the HTTPS protocol. The application tier should receive requests using the HTTPS protocol, but must communicate with public endpoints on the internet without exposing its public IP addresses.
The RDS database should specifically allow both inbound and outbound traffic requests to port 3306 from the web and application tiers, but explicitly deny all inbound and outbound traffic over all other protocols and ports.
What stateful network security resource should the Solutions Architect configure to protect the web tier?
A. Configure an AWS WAF Web ACL.
B. Configure a NAT Gateway placed in the web tier’s public subnet.
C. Deploy a Network Access Control List (NACL) with inbound and outbound rules allowing traffic from a Source/Destination of 0.0.0.0/0.
D. Deploy a security group for the web tier with Port 443 open to 0.0.0.0/0.
D. Deploy a security group for the web tier with Port 443 open to 0.0.0.0/0.
Explanation:
The best solution for the web tier is a security group with Port 443 open to 0.0.0.0/0. A Network ACL or NACL is not stateful, and a NAT Gateway is not necessary as the web tier is within a public subnet.
An application load balancer with HTTPS listeners can offload encryption/decryption for an application, but it does not act as a firewall to protect a resource.
An AWS WAF Web ACL is overkill for this tier: it is stateless, and it also carries a significant cost compared to a security group.
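A sketch of the rule itself: this is the shape of the IpPermissions entry that would be passed to boto3's `ec2` client via `authorize_security_group_ingress(GroupId=..., IpPermissions=[...])`. It opens TCP 443 to all sources; because security groups are stateful, return traffic for these connections is allowed automatically without an explicit outbound rule:

```python
# Shape of an HTTPS ingress rule for the web tier's security group, as an
# IpPermissions entry for authorize_security_group_ingress. Security groups
# are stateful, so responses to allowed inbound traffic flow back automatically.
ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 443,     # HTTPS only
    "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
}
```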
After weeks of testing, an organization is launching the first publicly available version of its online service, with plans to release version two in six months. They will host a scalable web application on Amazon EC2 instances associated with an auto scaling group behind a Network Load Balancer.
Version 1.0 of the application must maintain high availability at all times, but version 2.0 will require a different instance family to provide optimal performance for new app features. After extensive market seeding, version 1.0 of the application has built a strong user base, so they expect workloads to be consistent when they launch and steadily grow over time.
Which choice below is a durable and cost-effective solution in this scenario?
A. Use an EC2 Instance Savings Plan
B. Use Standard Reserved Instances
C. Use a Compute Savings Plan
D. Use Spot Instances
An IT department currently manages Windows-based file storage for user directories and department file shares. Due to increased costs and resources required to maintain this file storage, the company plans to migrate its files to the cloud and use Amazon FSx for Windows Server. The team is looking for the appropriate configuration options that will minimize their costs for this storage service.
Which of the following FSx for Windows configuration options are cost-effective choices the team can make in this scenario? (Choose 2 answers)
A. Choose the HDD storage type when creating the file system.
B. Enable data deduplication for the file system.
C. Choose the SSD storage type when creating the file system.
D. Disable data deduplication for the file system.
A team is deploying AWS resources, including EC2 and RDS database instances, into a VPC’s public subnet after recovering from a system failure. The team attempts to establish connections using the HTTPS protocol to these new instances from other subnets within the VPC, and from other peered VPCs within the same region, but receives numerous 500 error messages.
The team needs to quickly identify the cause or causes of the connection problem that prevents connecting to the new subnet.
What AWS solution should they use to identify the cause of the network problem?
A. Amazon Route 53 Application Recovery Controller (ARC)
B. Amazon Route 53 Resolver
C. VPC Reachability Analyzer
D. VPC Network Access Analyzer