Cert Prep: Certified Solutions Architect - Associate for AWS (SAA-C03) Flashcards
As the new Security Engineer for your company’s AWS cloud environment, you are responsible for developing best practice guidelines. In addition to data security such as encryption, you need to develop a plan for Security Groups, Access Control Lists, as well as IAM Policies. You want to roll out best practice policies for IAM.
Which choice below is not an IAM best practice?
A. Share access keys for cross-account access.
B. Use policy conditions for extra security.
C. Delegate by using roles instead of sharing credentials.
D. Rotate credentials regularly.
Which of the following data transfer solutions are free? Select three choices.
A. Data transfer between EC2, RDS, and Redshift in the same Availability Zone
B. Data transferred into and out of Elastic Load Balancers using private IP addresses
C. Data transferred into and out of an IPv6 address in a different VPC
D. Data transfer directly between S3, Glacier, DynamoDB, and EC2 in the same AWS Region
You are designing a web application that needs to be highly available and handle a large amount of read traffic. You are also designing an RDS database with a Multi-AZ configuration that will store transaction data, including personal customer data.
You are considering options to help offset some of the read traffic, and your client wants to discuss multiple options outside of Amazon RDS features. What other Amazon services would best offload the read traffic workload from the application’s database without requiring extensive app design changes?
A. Migrate static, WORM data to public Amazon S3 buckets.
B. Implement an ElastiCache instance to cache frequently accessed data.
C. Configure an SQS queue to manage read requests for frequently-accessed data.
D. Promote the Multi-AZ standby database to a read replica during peak hours.
B. Implement an ElastiCache instance to cache frequently accessed data.
Explanation:
Amazon ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. Amazon ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory system, instead of relying entirely on slower disk-based databases. Amazon ElastiCache is ideally suited as a front-end for Amazon Web Services like Amazon RDS and Amazon DynamoDB, providing extremely low latency for high-performance applications and offloading some of the request workload while these services provide long-lasting data durability.
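To make the pattern concrete, here is a minimal cache-aside sketch in Python using the redis-py client against a hypothetical ElastiCache (Redis) endpoint; the endpoint, key scheme, and db_lookup callable are illustrative assumptions, not part of the question.

```python
import json
import redis  # redis-py client; works with ElastiCache's Redis-compatible API

# Assumption: a hypothetical ElastiCache Redis cluster endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_customer(customer_id, db_lookup):
    """Return a customer record, consulting the cache before the database."""
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit: RDS is never touched
    record = db_lookup(customer_id)            # cache miss: fall through to RDS
    cache.setex(key, 300, json.dumps(record))  # cache the result for 5 minutes
    return record
```

On a cache hit, the request never reaches the database, which is how ElastiCache offloads read traffic without requiring extensive application redesign.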
Which would be the most efficient way to review and reduce networking costs by deleting idle load balancers?
A. Use the AWS Trusted Advisor Idle Load Balancers check to get a report of load balancers with a RequestCount of less than 100 in the last week. Send this report to S3 to find the load balancers to delete.
B. Use the AWS SDK to create a Lambda function to find and delete load balancers with RequestCount <10 in the last week.
C. Use the AWS Management Console to query and delete load balancers on each appropriate EC2 instance with RequestCount < 100 in the last week.
D. Use the Amazon Inspector Idle Load Balancers check to get a report of load balancers with a RequestCount of less than 100 in the last week. Send this report to S3 to find the load balancers to delete.
A company stores its application data onsite using iSCSI connections to archival disk storage located at an on-premises data center. Now management wants to identify cloud solutions to back up that data to the AWS cloud and store it at minimal expense.
The company needs to back up 200 TB of data to the cloud over the course of a month, and the speed of the backup is not a critical factor. The backups are rarely accessed, but when requested, they should be available in less than 6 hours.
What are the most cost-effective steps to archiving the data and meeting additional requirements?
A. 1) Copy the data to AWS Storage Gateway file gateways.
2) Store the data in Amazon S3 in the S3 Glacier Flexible Retrieval storage class.
B. 1) Copy the data to Amazon S3 using AWS DataSync.
2) Store the data in Amazon S3 in the S3 Glacier Deep Archive storage class.
C. 1) Back up the data using AWS Storage Gateway volume gateways.
2) Store the data in Amazon S3 in the S3 Glacier Flexible Retrieval storage class.
D. 1) Migrate the data to AWS using an AWS Snowball Edge device.
2) Store the data in Amazon S3 in the S3 Glacier Deep Archive storage class.
Your latest client contacted you a week before an audit on its AWS cloud infrastructure. Your client is concerned about its lack of automated policy enforcement for data protection and the difficulties they encounter when reporting for audit and compliance.
Which service should you enable to assist this client?
A. Amazon Macie
B. AWS DataSync
C. Amazon GuardDuty
D. AWS Backup
D. AWS Backup
Explanation:
The client is in search of a solution that automates policy enforcement for data protection and compliance. With AWS Backup, the client can enable automated data protection policies and schedules that will meet the regulatory compliance requirements for its upcoming audit. AWS Backup also allows you to centrally manage and automate the backup of data across AWS services such as EC2, S3, EBS, RDS, EFS, FSx, and more.
The remaining choices are incorrect for the following reasons:
AWS DataSync is a data transfer service that enables you to optimize network bandwidth and accelerate data transfer between on-premises storage and AWS storage. DataSync does not provide policy enforcement for data protection.
Amazon GuardDuty is a threat detection service that continuously monitors AWS accounts and workloads for malicious activity and anomalous behavior.
Amazon FSx provides a cost-effective file storage service that makes it easy to launch, run, and scale high-performance file systems in the cloud. It does not offer the data protection needed in this scenario.
Although Amazon Macie protects your data by discovering and protecting your sensitive data at scale, Macie does not provide automated data protection, compliance, and governance for your applications running in the cloud.
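As a rough illustration of the “automated data protection policies” idea, here is a hedged boto3 sketch that creates a backup plan and assigns resources to it by tag; the plan name, schedule, vault, role ARN, and tag values are placeholders, not part of the question.

```python
import boto3

backup = boto3.client("backup")

# Assumption: plan name, schedule, and retention are illustrative.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-compliance-plan",
        "Rules": [{
            "RuleName": "daily-35-day-retention",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
            "Lifecycle": {"DeleteAfterDays": 35},
        }],
    }
)

# Assign resources by tag, so newly tagged resources are covered automatically.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "backup",
            "ConditionValue": "true",
        }],
    },
)
```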
An environmental agency is concluding a 10-year study of mining sites and needs to transfer over 200 terabytes of data to the AWS cloud for storage and analysis.
Data will be gradually collected over a period of weeks in an area with no existing network bandwidth. Given the remote location, the agency wants a transfer solution that is cost-effective while requiring minimal device shipments back and forth.
Which AWS solution will best address the agency’s data storage and migration requirements?
A. AWS Snowcone
B. AWS Snowmobile
C. AWS Snowball Compute Optimized with GPU
D. AWS Snowball Storage Optimized
You are rapidly configuring a VPC for a new insurance application that needs to go live imminently to meet an upcoming compliance deadline. The insurance company must migrate a new application to this new VPC and connect it as quickly as possible to an on-premises, company-owned application that cannot migrate to the cloud.
Your immediate goal is to connect your on-premises app as quickly as possible, but speed and reliability are critical long-term requirements. The insurance company suggests implementing the quickest connection method now and, if necessary, switching over to a faster, more reliable connection service within the next six months.
Which strategy would work best to satisfy their short- and long-term networking requirements?
A. AWS VPN is the best short-term and long-term solution.
B. AWS VPN is the best short-term solution, and AWS Direct Connect is the best long-term solution.
C. VPC Endpoints are the best short-term and long-term solutions.
D. VPC Endpoints are the best short-term solution, and AWS VPN is the best long-term solution.
A pharmaceutical company is building an application that will use both AWS and on-premises resources. The application must comply with regulatory requirements and ensure the protection of intellectual property. One of the essential requirements is that data transferred between AWS and on-premises resources should not flow through the public internet. The company currently manages a single VPC with two private subnets in two different availability zones.
Which solution would enable connectivity between AWS and on-premises resources while maintaining a private connection?
A. Use a virtual private gateway with a customer gateway and create a site-to-site VPN connection.
B. Use AWS Transit Gateway to create a private site-to-site VPN connection.
C. Use AWS Direct Connect with a virtual private gateway and a private virtual interface (private VIF).
D. Use AWS VPN CloudHub to create a private site-to-site VPN connection.
C. Use AWS Direct Connect with a virtual private gateway and a private virtual interface (private VIF).
Explanation:
Several AWS services are available to help organizations connect AWS cloud resources with their on-premises infrastructure. Both AWS Direct Connect and a virtual private gateway with a site-to-site VPN connection are standard solutions for accomplishing this goal. However, the key to this question is that the team is looking for a solution where the data transferred between AWS and on-premises resources does not flow through the public internet. Because AWS Direct Connect uses a dedicated network connection and does not use the public internet to connect AWS resources to an on-premises network, it is the correct choice. Using a virtual private gateway with a customer gateway to create a site-to-site VPN connection would work, but it uses existing internet connections.
Now, let’s look at the other services mentioned in the remaining choices:
Though the Transit Gateway service can help connect multiple VPCs together with an on-premises network, it alone will not establish a private connection as described in this scenario. A transit gateway can be used with either AWS Direct Connect or a virtual private gateway to connect VPCs with an on-premises network. AWS VPN CloudHub is a service that solutions architects can use with a virtual private gateway to connect multiple customer networks located at different sites. With the virtual private gateway, the remote sites can communicate with each other and with the customer’s Amazon VPCs.
For more information on options for connecting customer networks to Amazon VPCs, take a look at the Amazon Virtual Private Cloud Connectivity Options whitepaper.
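For a sense of what the Direct Connect side of the correct answer involves, here is a hedged boto3 sketch that provisions a private virtual interface on an existing Direct Connect connection and attaches it to the VPC’s virtual private gateway; the connection ID, VLAN, ASN, and gateway ID are placeholders.

```python
import boto3

dx = boto3.client("directconnect")

# Assumption: all identifiers below are placeholders for your environment.
dx.create_private_virtual_interface(
    connectionId="dxcon-example",
    newPrivateVirtualInterface={
        "virtualInterfaceName": "on-prem-private-vif",
        "vlan": 101,
        "asn": 65000,  # your on-premises BGP ASN
        "virtualGatewayId": "vgw-0123456789abcdef0",  # VGW attached to the VPC
    },
)
```

A private VIF carries traffic over the dedicated cross-connect straight into the VPC, which is why none of it traverses the public internet.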
You host two separate applications that utilize the same DynamoDB tables containing environmental data. The first application, which focuses on data analysis, is hosted on compute-optimized EC2 instances in a private subnet. It retrieves raw data, processes the data, and uploads the results to a second DynamoDB table. The second application is a public website hosted on general-purpose EC2 instances within a public subnet and allows researchers to view the raw and processed data online.
For security reasons, you want both applications to access the relevant DynamoDB tables within your VPC rather than sending requests over the internet. You also want to ensure that while your data analysis application can retrieve and upload data to DynamoDB, outside researchers will not be able to upload data or modify any data through the public website.
How can you ensure each application is granted the correct level of authorization? (Choose 2 answers)
A. Deploy a DynamoDB VPC endpoint in the data analysis application’s private subnet, and a DynamoDB VPC endpoint in the public website’s public subnet.
B. Deploy one DynamoDB VPC endpoint in its own subnet. Update the route tables for each application’s subnet with routes to the DynamoDB VPC endpoint.
C. Configure and implement a single VPC endpoint policy to grant access to both applications.
D. Configure and implement separate VPC endpoint policies for each application.
A company is developing a mission-critical API on AWS using a Lambda function that accesses data stored in Amazon DynamoDB. Once it is in production, the API should respond in microseconds. The database configuration needs to handle high throughput and be capable of withstanding spikes in CPU consumption.
Which configuration options should the solutions architect choose to meet these requirements?
A. DynamoDB with auto scaling
B. DynamoDB provisioned capacity
C. DynamoDB with DAX burstable instances
D. DynamoDB on-demand capacity
Your company is concerned with potential poor architectural practices used by your core internal application. After recently migrating to AWS, you hope to take advantage of an AWS service that recommends best practices for specific workloads.
As a Solutions Architect, which of the following services would you recommend for this use case?
A. AWS Trusted Advisor
B. AWS Well-Architected Framework
C. Amazon Inspector
D. AWS Well-Architected Tool
A telecommunications company is developing an AWS cloud data bridge solution to process large amounts of data in real time from millions of IoT devices. The IoT devices communicate with the data bridge using UDP (User Datagram Protocol).
The company has deployed a fleet of EC2 instances to handle the incoming traffic but needs to choose the right Elastic Load Balancer to distribute traffic between the EC2 instances.
Which Amazon Elastic Load Balancer is the appropriate choice in this scenario?
A. Network Load Balancer
B. Application Load Balancer
C. Gateway Load Balancer
D. Classic Load Balancer
A team of solutions architects designed an eCommerce website. The team is concerned about API calls from malicious IP addresses or anomalous behaviors. They would like an intelligent service to continuously monitor their AWS accounts and workloads and then deploy AWS Lambda functions for remediations.
How would the solutions architects protect this web presence against the threats that they are concerned about?
A. Assess their AWS account and workloads with Amazon CodeGuru
B. Deploy Amazon GuardDuty on their AWS account and workloads.
C. Monitor their AWS account and workloads with Amazon Cognito
D. Enable Amazon Inspector on their AWS account and workloads.
You are designing an AWS cloud environment for a client. There are applications that will not be migrated to the cloud environment so it will be a hybrid solution. You also need to create an EFS file system that both the cloud and hybrid environments need to access. You will use Direct Connect to facilitate the communication between the on-premises servers and the EFS File System. Which statement characterizes how Amazon will charge you for this configuration?
A. You will be charged for AWS Direct Connect and for the data transmitted between the on-premises servers and EFS.
B. This is all covered under the VPC charge so there is no additional charge.
C. You will be charged for AWS Direct Connect; there is no additional cost for on-premises access to your Amazon EFS file systems.
D. There is no charge for Direct Connect and a flat fee for EFS.
C. You will be charged for AWS Direct Connect; there is no additional cost for on-premises access to your Amazon EFS file systems.
Explanation:
By using an Amazon EFS file system mounted on an on-premises server, you can migrate on-premises data into the AWS Cloud hosted in an Amazon EFS file system. You can also take advantage of bursting, meaning that you can move data from your on-premises servers into Amazon EFS, analyze it on a fleet of Amazon EC2 instances in your Amazon VPC, and then store the results permanently in your file system or move the results back to your on-premises server. There is no additional cost for on-premises access to your Amazon EFS file systems. Note that you’ll be charged for the AWS Direct Connect connection to your Amazon VPC.
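As a small illustration of the hybrid setup, here is a hedged boto3 sketch that looks up the file system’s mount target IP addresses, which an on-premises host reachable over Direct Connect would use for the NFS mount; the file system ID is a placeholder.

```python
import boto3

efs = boto3.client("efs")

# Assumption: the file system ID is a placeholder.
targets = efs.describe_mount_targets(FileSystemId="fs-0123456789abcdef0")
for mt in targets["MountTargets"]:
    print(mt["SubnetId"], mt["IpAddress"])

# An on-premises host connected via Direct Connect could then NFS-mount
# one of these mount target IPs, e.g.:
#   sudo mount -t nfs4 -o nfsvers=4.1 <IpAddress>:/ /mnt/efs
```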
You have implemented Amazon S3 multipart uploads to import on-premises files into the AWS cloud.
While the process is running smoothly, there are concerns about network utilization, especially during peak business hours, when multipart uploads require shared network bandwidth.
What is a cost-effective way to minimize network issues caused by S3 multipart uploads?
A. Transmit multipart uploads to AWS using VPC endpoints.
B. Transmit multipart uploads to AWS using AWS Direct Connect.
C. Pause multipart uploads during peak network usage.
D. Compress objects before initiating multipart uploads.
You plan to develop an efficient auto scaling process for EC2 instances. A key to this will be bootstrapping for newly created instances. You want to configure new instances as quickly as possible to get them into service efficiently upon startup. What tasks can bootstrapping perform? (Choose 3 answers)
A. Increase network throughput
B. Enroll an instance into a directory service
C. Install application software
D. Apply patches and OS updates
Your data engineering team has recently migrated its Hadoop infrastructure to AWS. They ask if you are aware of options for higher-speed network connectivity between their instances.
What two enhanced network options can you present to the team? (Choose 2 answers)
A. Elastic Network Adapter (ENA)
B. Dual-stack Network Adapter (DNA)
C. Intel 82599 Virtual Function (VF) interface
D. AMD Opteron Virtual Function (VF) interface
A. Elastic Network Adapter (ENA)
C. Intel 82599 Virtual Function (VF) interface
Explanation:
Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Depending on your instance type, you will either use the Intel 82599 Virtual Function interface or the Elastic Network Adapter.
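If you want to verify which enhanced networking option an instance uses, here is a hedged boto3 sketch that checks the ENA attribute on a running instance; the instance ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Assumption: the instance ID is a placeholder.
attr = ec2.describe_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    Attribute="enaSupport",
)
# True means the Elastic Network Adapter is enabled on this instance.
print("ENA enabled:", attr.get("EnaSupport", {}).get("Value", False))
```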
You are working on a two-tier application hosted on a cluster of EC2 instances behind an Application Load Balancer. During peak times, the web server’s auto scaling group is configured to add additional servers when CPU utilization reaches 70% on the existing servers.
Due to compliance requirements, only approved Amazon Machine Images can be utilized in the creation of servers for the application, and all existing AMIs need to be compliant. You need to determine a way to monitor the EC2 instances for non-compliant Amazon Machine Images and be alerted when a non-compliant image is in use.
Which of the following monitoring solutions would provide the necessary visibility and alerting whenever a non-compliant AMI is in use?
A. Enable AWS Config in the region the application is hosted in. Utilize the AWS-managed rule ‘approved-amis-by-id’ to trigger an alert whenever a non-compliant AMI is in use.
B. Create a CloudWatch Event type ‘EC2 Instance State-change Notification’ in the region the application is hosted in. Create an event rule to trigger an alert whenever a non-compliant AMI is in use.
C. Enable Amazon Inspector for the EC2 instances in the auto-scaling group. Utilize the AWS-managed rule ‘Approved CIS hardened AMIs’ to trigger an alert whenever a non-compliant AMI is in use.
D. Enable AWS Shield in the region the application is hosted in. Create a rule to trigger an alert whenever a non-compliant AMI is in use.
A. Enable AWS Config in the region the application is hosted in. Utilize the AWS-managed rule ‘approved-amis-by-id’ to trigger an alert whenever a non-compliant AMI is in use.
Explanation:
AWS Config can assist with security monitoring by alerting you when resources such as security groups and IAM credentials have had changes to their baseline configurations. AWS Config has a managed rule set, and the AWS-managed rule ‘approved-amis-by-id’ can check that running instances are using approved Amazon Machine Images, or AMIs. You can specify a list of approved AMIs by ID or provide a tag to specify the list of AMI IDs.
The remaining choices are incorrect for the following reasons:
● Amazon Inspector is a tool used primarily for checking the network accessibility and security vulnerabilities of your EC2 instances and the security state of the applications running on those instances, as opposed to alerting on non-compliant Amazon Machine Images.
● AWS Shield is a managed DDoS protection service enabled by Amazon to protect applications running on AWS. The rules on AWS Shield are not designed to track and alert on non-compliant Amazon Machine Images.
● Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources. The CloudWatch Event type ‘EC2 Instance State-change Notification’ will log state changes of Amazon EC2 instances, not non-compliant Amazon Machine Images.
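To show roughly how the correct answer is wired up, here is a hedged boto3 sketch that enables the AWS-managed ‘approved-amis-by-id’ rule; the AMI IDs are placeholders for your approved list.

```python
import boto3

config = boto3.client("config")

# Assumption: the AMI IDs below are placeholders for your approved list.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "approved-amis-by-id",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "APPROVED_AMIS_BY_ID",  # AWS-managed rule
        },
        "InputParameters": '{"amiIds": "ami-0123456789abcdef0,ami-0fedcba9876543210"}',
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)
```

Instances launched from any AMI not in the list are flagged NON_COMPLIANT, which can then drive an SNS or EventBridge alert.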
A new, small hotel chain has hired you to optimize an existing small, single-AZ RDS DB instance that manages reservations for their original location. They recently expanded to new locations and need to optimize their online reservation service.
Incoming requests for reservations could double or triple their existing database size in a matter of hours, depending on how well their advertising works. Given how much capital they invested in new locations, they value the availability of the database far above any cost concerns. During this peak period, the RDS database will need to manage an equal number of reads and writes.
With a limited amount of time to prepare for a potential spike, what is the best single step to ensure the database remains available to schedule reservations with no loss of service?
A. Enable multi-AZ configuration.
B. Enable Amazon RDS Storage Auto Scaling.
C. Manually modify the DB instance to a larger instance class.
D. Enable read replicas.
C. Manually modify the DB instance to a larger instance class.
Explanation:
First, let’s review the key pieces of information in this question:
They currently use a small RDS instance to manage reservations... Incoming requests for reservations could double or triple their existing database size in a matter of hours... the RDS database will need to manage an equal number of reads and writes.
Handling a large number of both reads and writes requires scaling vertically. Read replicas are ideal for handling spikes in read requests, but they do not help manage writes.
Multi-AZ configurations are a feature to enable high availability but are not designed to handle increased read or write workloads.
Storage auto scaling could handle storage limitations, but an influx of writes would overwhelm the small instance’s compute and memory limitations. It is also feasible that auto scaling would not scale fast enough, given how quickly the hotel business expects its database to double in size. Auto scaling increases database size gradually, and once the storage scales, it cannot scale again for approximately six hours.
This is why the best choice is to manually modify the DB instance to a larger instance class.
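The single-step fix maps to one API call. A hedged boto3 sketch, with the identifier and target instance class as placeholders:

```python
import boto3

rds = boto3.client("rds")

# Assumption: identifier and target class are placeholders.
rds.modify_db_instance(
    DBInstanceIdentifier="reservations-db",
    DBInstanceClass="db.m5.2xlarge",  # scale up from a small instance class
    ApplyImmediately=True,            # don't wait for the maintenance window
)
```

Note that ApplyImmediately trades a brief disruption during the modification for not having to wait for the next maintenance window.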
A Solutions Architect is configuring the network security for a new three-tier application. The application has a web tier of EC2 instances in a public subnet, an application tier on EC2 instances in a private subnet, and a large RDS MySQL database instance in a second private subnet behind an internal load balancer.
The web tier will allow inbound requests using the HTTPS protocol. The application tier should receive requests using the HTTPS protocol, but must communicate with public endpoints on the internet without exposing public IP addresses.
The RDS database should specifically allow both inbound and outbound traffic requests to port 3306 from the web and application tiers, but explicitly deny all inbound and outbound traffic over all other protocols and ports.
What stateful network security resource should the Solutions Architect configure to protect the web tier?
A. Configure an AWS WAF Web ACL.
B. Configure a NAT Gateway placed in the web tier’s public subnet.
C. Deploy a Network Access Control List (NACL) with inbound and outbound rules allowing traffic from a Source/Destination of 0.0.0.0/0.
D. Deploy a security group for the web tier with Port 443 open to 0.0.0.0/0.
D. Deploy a security group for the web tier with Port 443 open to 0.0.0.0/0.
Explanation:
The best solution for the web tier is a security group with port 443 open to 0.0.0.0/0. A network ACL (NACL) is stateless, not stateful, and a NAT gateway is not necessary because the web tier is within a public subnet.
An application load balancer with HTTPS listeners can offload encryption/decryption for an application, but it does not act as a firewall to protect a resource.
An AWS WAF web ACL is overkill for this tier: it is stateless and also carries a significant cost compared to a security group.
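A hedged boto3 sketch of the winning configuration, with the VPC ID as a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# Assumption: the VPC ID is a placeholder.
sg = ec2.create_security_group(
    GroupName="web-tier-sg",
    Description="Allow inbound HTTPS from anywhere",
    VpcId="vpc-0123456789abcdef0",
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
# Because security groups are stateful, return traffic for these connections
# is allowed automatically; no explicit outbound rule is needed for responses.
```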
After weeks of testing, an organization is launching the first publicly available version of its online service, with plans to release version two in six months. They will host a scalable web application on Amazon EC2 instances associated with an auto scaling group behind a Network Load Balancer.
Version 1.0 of the application must maintain high availability at all times, but version 2.0 will require a different instance family to provide optimal performance for new app features. After extensive market seeding, version 1.0 of the application has built a strong user base, so they expect workloads to be consistent when they launch and steadily grow over time.
Which choice below is a durable and cost-effective solution in this scenario?
A. Use an EC2 Instance Savings Plan
B. Use Standard Reserved Instances
C. Use a Compute Savings Plan
D. Use Spot Instances
An IT department currently manages Windows-based file storage for user directories and department file shares. Due to the increased costs and resources required to maintain this file storage, the company plans to migrate its files to the cloud and use Amazon FSx for Windows File Server. The team is looking for the appropriate configuration options that will minimize their costs for this storage service.
Which of the following FSx for Windows File Server configuration options are cost-effective choices the team can make in this scenario? (Choose 2 answers)
A. Choose the HDD storage type when creating the file system.
B. Enable data deduplication for the file system.
C. Choose the SSD storage type when creating the file system.
D. Disable data deduplication for the file system.
A team is deploying AWS resources, including EC2 and RDS database instances, into a VPC’s public subnet after recovering from a system failure. The team attempts to establish connections using the HTTPS protocol to these new instances from other subnets within the VPC, and from other peered VPCs within the same region, but receives numerous 500 error messages.
The team needs to quickly identify the cause or causes of the connection problem that prevents connecting to the new subnet.
What AWS solution should they use to identify the cause of the network problem?
A. Amazon Route 53 Application Recovery Controller (ARC)
B. Amazon Route 53 Resolver
C. VPC Reachability Analyzer
D. VPC Network Access Analyzer
You are migrating on-premises legal files to the AWS Cloud in Amazon S3 buckets. The corporate audit team will review all legal files within the next year, but until that review is completed, you need to ensure that the legal files are not updated or deleted for the next 18 months.
There are millions of objects contained in the buckets that need review, and you are concerned you will need to spend an excessive amount of time protecting each object.
What steps will ensure the files can be uploaded most efficiently but have the required protection for the specific time period of 18 months? (Choose 2 answers)
A. Set a default retention period of 18 months on all related S3 buckets.
B. Set a retention period of 18 months on all relevant object versions via a batch operation.
C. Enable object locks on all relevant S3 buckets with a retention mode of compliance mode.
D. Set a default legal hold on all related S3 buckets.
Your company has recently acquired several small start-up tech companies within the last year. In an effort to consolidate your resources, you are gradually migrating all digital files to your parent company’s AWS accounts and storing a large number of files within an S3 bucket.
You are uploading millions of files and want to save costs, but you have not had the opportunity to review many of the files and documents to understand which files will be accessed frequently or infrequently.
What would be the best way to quickly upload the objects to S3 and ensure the best storage class from a cost perspective?
A. Upload all files to the Amazon S3 Standard-IA storage class and immediately set up all objects to be processed with Storage Class Analysis.
B. Upload all files to the Amazon S3 Intelligent-Tiering storage class and review costs related to the frequency of access over time.
C. Upload all the files to the Amazon S3 Standard storage class and review costs for access frequency over time.
D. Upload all the files to the Amazon S3 Standard-IA storage class and review costs for access frequency over time.
An IT department manages a content management system (CMS) running on an Amazon EC2 instance mounted to an Elastic File System (EFS). The CMS throughput demands are high compared to the amount of data stored on the file system.
What is the appropriate EFS configuration in this case?
A. Choose Bursting Throughput mode for the file system.
B. Start with the General Purpose performance mode and update the file system to Max I/O if it reaches its I/O limit.
C. Start with Bursting Throughput mode and update the file system to Max I/O if it reaches its I/O limit.
D. Choose Provisioned Throughput mode for the file system.
A company’s container applications are managed with Kubernetes and hosted on Windows virtual servers. The company wants to migrate these applications to the AWS cloud and needs a solution that supports Kubernetes pods hosted on Windows servers.
The solution must manage the Kubernetes API servers and the etcd cluster. The company’s development team would prefer that AWS manage the host instances and containers as much as possible, but is willing to manage them both if necessary.
Which AWS service offers the best options for the developers’ preferences and the company’s essential requirement for their container application?
A. Amazon Elastic Compute Cloud (EC2)
B. Amazon Elastic Kubernetes Service (EKS) on AWS Fargate
C. Amazon Elastic Kubernetes Service (EKS) with self-managed node groups
D. Amazon Elastic Kubernetes Service (EKS) with EKS-managed node groups
A company maintains an on-premises data center and performs daily data backups to on-disk and tape storage to comply with regulatory requirements. The IT department responsible for this project plans to continue maintaining the primary data on-site and is looking for an AWS cloud solution for data backup that will work well with their current archiving process.
Which of the following AWS storage services should the team choose to manage its data backup requirements?
A. AWS Backup
B. AWS Tape Gateway
C. AWS Volume Gateway
D. AWS File Gateway
You are responsible for setting up a new Amazon EFS file system. The organization’s security policies mandate that the file system store all data in an encrypted form. The organization does not need to control key rotation or policies regarding access to the KMS key.
What steps should you take to ensure the data is encrypted at rest in this scenario?
A. When mounting the EFS file system to an EC2 instance, use the default AWS-managed KMS key to encrypt the data.
B. When creating the EFS file system, enable encryption using a customer-managed KMS key.
C. When creating the EFS file system, enable encryption using the default AWS-managed KMS key for Amazon EFS.
D. When mounting the EFS file system to an EC2 instance, use a customer-managed KMS key to encrypt the data.
You are placed in charge of your company’s cloud storage and need to deploy empty EBS volumes. You are concerned about an initial performance hit when the new volumes are first accessed.
What steps should you take to ensure peak performance when the empty EBS volumes are first accessed?
A. Enable fast snapshot restore
B. Create a RAID 0 array
C. Do nothing - empty EBS volumes do not require initialization
D. Force the immediate initialization of the entire volume
While building your environment on AWS you decide to use Key Management Service to help you manage encryption of data volumes. As part of your architecture you design a disaster recovery environment in a second region.
What should you anticipate in your architecture regarding the use of KMS in this environment?
A. KMS is not highly available by default; you have to make sure you span KMS across at least two availability zones to avoid single points of failure.
B. KMS is a global service; your architecture must account for regularly migrating encryption keys across regions to allow the disaster recovery environment to decrypt volumes.
C. KMS is highly available within the region; to make it span across multiple regions you have to connect primary and DR environments with a Direct Connect line.
D. KMS keys can operate on a multi-region scope, but AWS recommends region-specific keys for most cases.
The IT department at a pharmaceutical company plans to reduce the size of one of its data centers and needs to migrate some of the data stored on a network file system to the Amazon cloud.
After the team migrates the files to the cloud, scientists and on-premises applications still need access to these resources as if they were still on site. The team is looking for an automated service that they can use to transfer the assets to the cloud and then continue accessing the files from on-premises after migration.
Which combination of AWS services is the appropriate choice to migrate data from an on-premises network file system and continue to access these files in the cloud seamlessly from on-premises?
A. Use AWS Batch to migrate the data and AWS Direct Connect to enable on-premises access to files in the AWS cloud.
B. Use AWS DataSync to migrate the data and AWS Storage Gateway (File Gateway) to enable on-premises access to files in the AWS cloud.
C. Use AWS Backup to migrate the data and AWS Storage Gateway (File Gateway) to enable on-premises access to files in the AWS cloud.
D. Use AWS Storage Gateway to migrate the data and AWS Direct Connect to enable on-premises access to files in the AWS cloud.
You are working on a project that involves several AWS resources that will be protected by cryptographic keys. You decided to create these keys using AWS Key Management Services (KMS) and you will need to evaluate the security cost across resources and projects.
How will you easily categorize the security keys’ cost?
A. To each key, add a tag and specify the tag key and tag value. Aggregate the costs by tags.
B. To each key, add a description and specify the reason for creating this key. Aggregate the costs by descriptions.
C. Create asymmetric keys and you will be able to aggregate the costs by resources and projects.
D. After creating the keys, use AWS Organizations to obtain costs across resources and projects.
You are the AWS account owner for a small IT company, with a team of developers assigned to your account as IAM users within existing IAM groups.
New associate-level developers manage resources in the Dev/Test environments, and these resources are quickly launched, used, and then deleted to save on resource costs. The new developers have read-only permissions in the production environment.
There is a complex existing set of buckets intended to separate Development and Test resources from Production resources, but you know this policy of separation between environments is not followed at all times. Your company needs to prevent new developers from accessing production environment files placed in an incorrect S3 bucket, because these production-level objects are accidentally deleted along with other Dev/Test S3 objects.
The ideal solution will prevent existing objects from being accidentally deleted and automatically minimize the problem in the future.
What steps are the most efficient to continuously enforce the tagging best practices and apply the principle of least privilege within Amazon S3? (Choose 2 answers)
A. Assign IAM policies to the Dev/Test IAM group that authorize S3 object operations based on object tags.
B. Update all existing object tags to correctly reflect their environment using Amazon S3 batch operations.
C. Implement an object tagging policy using AWS Config’s Auto Remediation feature.
D. Create an AWS Lambda function to check object tags for each new Amazon S3 object. An incorrect tag would trigger an additional Lambda function to fix the tag.
You are deploying a two-tiered web application with web servers hosted on Amazon EC2 in a public subnet of your VPC and your database tier hosted on RDS instances isolated in a private subnet.
Your requirements call for the web tier to be highly available. Which services listed will be needed to make the web tier highly available? (Choose 3 answers)
A. EC2 Auto Scaling
B. Elastic Load Balancer
C. Route 53
D. Multi-AZ for RDS
A startup company currently stores all documents in S3. At the beginning of last year, they created a bonus policy, but after a long year of creating and storing further documentation, it seems to be lost in the S3 bucket.
Which of the following services could most easily help you find the bonus policy document?
A. Amazon Kendra
B. Amazon Rekognition
C. Amazon Comprehend
D. Amazon S3 Search
C. Amazon Comprehend
Explanation:
Amazon Comprehend can find documents about a particular subject using topic modeling, scan a set of documents to determine the topics discussed, and find the documents associated with each topic.
The remaining choices are incorrect for the following reasons:
● Amazon S3 Search is not an AWS service.
● Amazon Kendra searches unstructured data and can be used with S3, but it requires a greater setup process than Comprehend.
● Amazon Rekognition is used for image and video analysis.
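For reference, topic modeling in Comprehend runs as an asynchronous job over documents stored in S3. A hedged boto3 sketch, with bucket names and the role ARN as placeholders:

```python
import boto3

comprehend = boto3.client("comprehend")

# Assumption: bucket names and the IAM role ARN are placeholders.
comprehend.start_topics_detection_job(
    JobName="find-policy-documents",
    InputDataConfig={
        "S3Uri": "s3://company-docs-bucket/",
        "InputFormat": "ONE_DOC_PER_FILE",
    },
    OutputDataConfig={"S3Uri": "s3://company-docs-results/"},
    DataAccessRoleArn="arn:aws:iam::123456789012:role/ComprehendS3Access",
    NumberOfTopics=25,
)
# The job output maps each topic to the documents associated with it,
# which is how you could locate the lost bonus policy by subject.
```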
A company is configuring its new AWS Organization and has implemented an allow list strategy. Now the company needs to grant special permissions to a single AWS account in the Development organizational unit (OU).
All AWS users within this single AWS account need to be granted full access to Amazon EC2. Other AWS accounts within the Development OU will not have full access to Amazon EC2. Certain accounts within the Development OU will have partial access to EC2 as needed.
The IT Security department has applied a service control policy (SCP) to the organization’s root that allows AmazonEC2FullAccess.
What choice below includes all the necessary steps to grant full EC2 access only to AWS users in this single AWS account?
A. Apply an SCP granting AmazonEC2FullAccess to the Development OU and the specific AWS account. Apply the AmazonEC2FullAccess IAM policy to all IAM users in the account.
B. Apply the AmazonEC2FullAccess IAM policy to all IAM users in the account.
C. Apply an SCP granting AmazonEC2FullAccess to the Development OU and the specific AWS account. Apply a separate SCP denying EC2 access to all other AWS accounts within the Development OU.
D. Apply an SCP granting AmazonEC2FullAccess to the Development OU and the specific AWS account.
A. Apply an SCP granting AmazonEC2FullAccess to the Development OU and the specific AWS account. Apply the AmazonEC2FullAccess IAM policy to all IAM users in the account.
Explanation:
Inheritance for service control policies behaves like a filter through which permissions flow to all parts of the tree below. To allow an AWS service API at the member account level, you must allow that API at every level between the member account and the root of your organization. You must attach SCPs to every level from your organization’s root to the member account that allow the given AWS service API (such as EC2 Full Access or S3 Full Access).
An allow list strategy has you remove the FullAWSAccess SCP that is attached by default to every OU and account. This means that no APIs are permitted anywhere unless you explicitly allow them. To allow a service API to operate in an AWS account, you must create your own SCPs and attach them to the account and every OU above it, up to and including the root. Every SCP in the hierarchy, starting at the root, must explicitly allow the APIs that you want to be usable in the OUs and accounts below it.
Users and roles in accounts must still be granted permissions using AWS Identity and Access Management (IAM) permission policies attached to them or to groups. The SCPs only determine what permissions are available to be granted by such policies. The user can’t perform any actions that the applicable SCPs don’t allow.
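To ground the allow-list mechanics, here is a hedged boto3 sketch that creates an EC2-allow SCP and attaches it at both required levels below the root; the OU and account IDs are placeholders.

```python
import json
import boto3

org = boto3.client("organizations")

# Assumption: OU and account IDs below are placeholders.
scp = org.create_policy(
    Name="AllowEC2FullAccess",
    Description="Allow all EC2 actions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "ec2:*", "Resource": "*"}],
    }),
)
policy_id = scp["Policy"]["PolicySummary"]["Id"]

# Attach at every level between the root and the member account:
# the Development OU and the account itself (the root already allows EC2).
org.attach_policy(PolicyId=policy_id, TargetId="ou-dev1-example0")
org.attach_policy(PolicyId=policy_id, TargetId="111122223333")

# Remember: SCPs only set the permission boundary. IAM users in the account
# still need the AmazonEC2FullAccess IAM policy to actually use EC2.
```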
Multiple AWS accounts within a company’s AWS Organization are managing separate websites on EC2 instances behind Application Load Balancers, with static content and user-generated content stored in S3 buckets behind CloudFront web distributions.
The engineering team wants to protect these vulnerable resources from common web attacks, such as SQL injection, cross-site scripting, and DDoS attacks. Currently, each AWS account allows different types of traffic using AWS Web Application Firewall (WAF). At the same time, they want to use an approach that will allow them to protect new EC2 instances and CloudFront distributions that will be added in the future.
What would be an effective and efficient approach to meet this requirement?
A. Create a set of AWS Web Application Firewall (WAF) rules for account managers for each relevant AWS account to deploy and associate a web ACL to every EC2 instance and S3 bucket.
B. Associate AWS Shield Advanced with every Application Load Balancer and CloudFront distribution.
C. Create a service control policy (SCP) to deny all IAM users’ organizational units (OUs) access to AWS WAF. Allow only AWS account root users to modify or create firewall rules with AWS WAF.
D. Tag web application resources such as EC2 instances and CloudFront distributions with resource tags based on their security requirements. Using Firewall Manager, add appropriate AWS WAF rules for each resource tag.
D. Tag web application resources such as EC2 instances and CloudFront distributions with resource tags based on their security requirements. Using Firewall Manager, add appropriate AWS WAF rules for each resource tag.
Explanation:
AWS Firewall Manager simplifies your administration and maintenance tasks across multiple accounts and resources for AWS WAF, AWS Shield Advanced, Amazon VPC security groups, and AWS Network Firewall. With Firewall Manager, you set up your AWS WAF firewall rules, Shield Advanced protections, Amazon VPC security groups, and Network Firewall firewalls just once. The service automatically applies the rules and protections across your accounts and resources, even as you add new resources. A prerequisite to using AWS Firewall Manager is to use AWS Organizations, with all features enabled.
Using Firewall Manager you define the WAF rules in a single place and assign those rules to resources containing a specific tag or resources of a specific type, like CloudFront distributions. Firewall Manager is particularly useful when you want to protect your entire organization rather than a small number of specific accounts and resources, or if you frequently add new resources that you want to protect. Firewall Manager also provides centralized monitoring of DDoS attacks across your organization.
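As a rough sketch of how a Firewall Manager policy targets resources by tag, here is a hedged boto3 example; the policy contents, resource type, and tag are illustrative, and the exact ManagedServiceData format should be checked against the AWS documentation.

```python
import json
import boto3

# Must be called from the designated Firewall Manager administrator account.
fms = boto3.client("fms")

# Assumption: rule configuration and tag values are illustrative placeholders.
fms.put_policy(
    Policy={
        "PolicyName": "org-wide-waf-baseline",
        "SecurityServicePolicyData": {
            "Type": "WAFV2",
            "ManagedServiceData": json.dumps({
                "type": "WAFV2",
                "defaultAction": {"type": "ALLOW"},
                "preProcessRuleGroups": [],
                "postProcessRuleGroups": [],
                "overrideCustomerWebACLAssociation": False,
            }),
        },
        "ResourceType": "AWS::CloudFront::Distribution",
        "ResourceTags": [{"Key": "security-tier", "Value": "public-web"}],
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,  # auto-apply to matching resources, current and future
    }
)
```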