MyCloudGuru Practice Exam Flashcards
Design Resilient Architectures
You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling groups that you need to create. One requirement is that you need to reuse some software licenses and therefore need to use dedicated hosts on EC2 instances in your Auto Scaling groups. What step must you take to meet this requirement?
A) Create the Dedicated Host EC2 instances, and then add them to an existing Auto Scaling group.
B) Use a launch template with your Auto Scaling group and select the Dedicated Host option.
C) Create your launch configuration, but manually change the instances to Dedicated Hosts in the EC2 console.
D) Make sure your launch configurations are using Dedicated Hosts.
B) Use a launch template with your Auto Scaling group and select the Dedicated Host option.
In addition to the features of Amazon EC2 Auto Scaling that you can configure by using launch templates, launch templates provide more advanced Amazon EC2 configuration options. For example, you must use launch templates to use Amazon EC2 Dedicated Hosts. Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use. While Amazon EC2 Dedicated Instances also run on dedicated hardware, the advantage of using Dedicated Hosts over Dedicated Instances is that you can bring eligible software licenses from external vendors and use them on EC2 instances.
If you currently use launch configurations, you can specify a launch template when you update an Auto Scaling group that was created using a launch configuration.
To create a launch template to use with an Auto Scaling group, create the template from scratch, create a new version of an existing template, or copy the parameters from a launch configuration, running instance, or other template.
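As a rough illustration, a minimal boto3 (Python) sketch of this setup might look like the following; the AMI ID, subnet IDs, and resource names are placeholders, and a real Dedicated Host deployment would also need host capacity allocated (and possibly a host resource group or license configuration).

import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Launch template whose instances are placed on Dedicated Hosts (Tenancy=host).
ec2.create_launch_template(
    LaunchTemplateName="licensed-app-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",          # placeholder AMI
        "InstanceType": "m5.large",
        "Placement": {"Tenancy": "host"},
    },
)

# Auto Scaling group that launches from the template above.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="licensed-app-asg",
    LaunchTemplate={"LaunchTemplateName": "licensed-app-template", "Version": "$Latest"},
    MinSize=1,
    MaxSize=4,
    VPCZoneIdentifier="subnet-0aaa1111bbbb2222c,subnet-0ddd3333eeee4444f",  # placeholder subnets
)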
Specify Secure Applications and Architectures
You have been evaluating the NACLs in your company. Currently, you are looking at the default network ACL. Which statement is true about NACLs?
A) The default configuration of the default NACL is Deny, and the default configuration of a custom NACL is Allow.
B) The default configuration of the default NACL is Allow, and the default configuration of a custom NACL is Allow.
C) The default configuration of the default NACL is Allow, and the default configuration of a custom NACL is Deny.
D) The default configuration of the default NACL is Deny, and the default configuration of a custom NACL is Deny.
C) The default configuration of the default NACL is Allow, and the default configuration of a custom NACL is Deny.
Your VPC automatically comes with a modifiable default network ACL. By default, it allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic. You can create a custom network ACL and associate it with a subnet. By default, each custom network ACL denies all inbound and outbound traffic until you add rules.
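A minimal boto3 sketch of the custom-NACL behavior described above; the VPC ID is a placeholder, and the single rule shown (inbound HTTPS) is just an example.

import boto3

ec2 = boto3.client("ec2")

# A newly created custom network ACL denies all inbound and outbound traffic.
acl = ec2.create_network_acl(VpcId="vpc-0123456789abcdef0")  # placeholder VPC ID
acl_id = acl["NetworkAcl"]["NetworkAclId"]

# Traffic is only allowed once explicit rules are added, e.g. inbound HTTPS:
ec2.create_network_acl_entry(
    NetworkAclId=acl_id,
    RuleNumber=100,
    Protocol="6",              # TCP
    RuleAction="allow",
    Egress=False,              # inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)
# The ACL then has to be associated with the subnet it should protect.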
Specify Secure Applications and Architectures
A consultant is hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. The consultant has launched several instances, created security groups, and has associated security groups with instances. The consultant wants to change the security groups for an instance. Which statement is true?
A) You can change the security groups for an instance when the instance is in the pending or stopped state.
B) You can change the security groups for an instance when the instance is in the running or stopped state.
C) You can’t change security groups. Create a new instance and attach the desired security groups.
D) You can’t change the security groups for an instance when the instance is in the running or stopped state.
B) You can change the security groups for an instance when the instance is in the running or stopped state.
After you launch an instance into a VPC, you can change the security groups that are associated with the instance. You can change the security groups for an instance when the instance is in the running or stopped state.
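For example, a hedged boto3 sketch of swapping the security groups on a running instance; the instance and security group IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Replaces the full set of security groups on a running (or stopped) instance in a VPC.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    Groups=["sg-0123456789abcdef0", "sg-0fedcba9876543210"],
)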
Design Resilient Architectures
You have two EC2 instances running in the same VPC, but in different subnets. You are removing the secondary ENI from an EC2 instance and attaching it to another EC2 instance. You want this to be fast and with limited disruption. So you want to attach the ENI to the EC2 instance when it’s running. What is this called?
A) synchronous attach
B) warm attach
C) cold attach
D) hot attach
D) hot attach
Here are some best practices for configuring network interfaces.

You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach). You can detach secondary network interfaces when the instance is running or stopped. However, you can’t detach the primary network interface.

You can move a network interface from one instance to another if the instances are in the same Availability Zone and VPC but in different subnets.

When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces. Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance. A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly; instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves.

Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance. If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. If possible, use a secondary private IPv4 address on the primary network interface instead.
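A hedged boto3 sketch of the detach/hot-attach sequence described above; the attachment, ENI, and instance IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Detach the secondary ENI from the first (running) instance...
ec2.detach_network_interface(AttachmentId="eni-attach-0123456789abcdef0", Force=False)

# ...then hot attach it to the second running instance as device index 1.
ec2.attach_network_interface(
    NetworkInterfaceId="eni-0123456789abcdef0",
    InstanceId="i-0fedcba9876543210",
    DeviceIndex=1,
)
# Depending on the OS, you may still need to bring the interface up and adjust routing.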
Design Cost-Optimized Architectures
You work for a Defense contracting company. The company develops software applications which perform intensive calculations in the area of Mechanical Engineering related to metals for ship building. The company competes for and wins contracts that typically range from 1 to 5 years. These long-term contracts mean that the duration of your need for EC2 instances can be matched to the length of these contracts, and then extended if necessary. The main requirement is consistent performance for the duration of the contract. Which EC2 purchasing option provides the best value, given these long-term contracts?
A) On-Demand
B) Reserved
C) Spot
D) Dedicated Host
B) Reserved
Longer-term contracts such as this are ideally suited to gain maximum value by using reserved instances.
Amazon EC2 provides the following purchasing options to enable you to optimize your costs based on your needs:
On-Demand Instances – Pay, by the second, for the instances that you launch.
Savings Plans – Reduce your Amazon EC2 costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years.
Reserved Instances – Reduce your Amazon EC2 costs by making a commitment to a consistent instance configuration, including instance type and region, for a term of 1 or 3 years.
Scheduled Instances – Purchase instances that are always available on the specified recurring schedule, for a one-year term.
Spot Instances – Request unused EC2 instances, which can reduce your Amazon EC2 costs significantly.
Dedicated Hosts – Pay for a physical host that is fully dedicated to running your instances, and bring your existing per-socket, per-core, or per-VM software licenses to reduce costs.
Dedicated Instances – Pay, by the hour, for instances that run on single-tenant hardware.
Capacity Reservations – Reserve capacity for your EC2 instances in a specific Availability Zone for any duration.
Design Cost-Optimized Architectures
You have joined a newly formed software company as a Solutions Architect. It is a small company, and you are the only employee with AWS experience. The owner has asked for your recommendations to ensure that the AWS resources are deployed to proactively remain within budget. Which AWS service can you use to help ensure you don’t have cost overruns for your AWS resources?
A) AWS Budgets
B) Cost Explorer
C) Inspector
D) Billing and Cost Management
A) AWS Budgets
AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. And remember the keyword, proactively. With AWS Budgets, we can be proactive about attending to cost overruns before they become a major budget issue at the end of the month or quarter. Budgets can be tracked at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. Budget alerts can be sent via email and/or Amazon Simple Notification Service (SNS) topic. You can also use AWS Budgets to set a custom reservation utilization target and receive alerts when your utilization drops below the threshold you define. RI utilization alerts support Amazon EC2, Amazon RDS, Amazon Redshift, and Amazon ElastiCache reservations. Budgets can be created and tracked from the AWS Budgets dashboard, or via the Budgets API.
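As an illustration, a minimal boto3 sketch that creates a monthly cost budget with a proactive, forecast-based alert; the account ID, budget amount, and email address are placeholders.

import boto3

budgets = boto3.client("budgets")

# Monthly $500 cost budget that alerts by email when *forecasted* spend
# exceeds 80% of the limit -- a proactive alert before the overrun happens.
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "monthly-aws-budget",
        "BudgetLimit": {"Amount": "500", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "FORECASTED",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "finance@example.com"}],
        }
    ],
)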
Define Performance Architectures
A software company is developing an online “learn a new language” application. The application will be designed to teach up to 20 different languages for native English and Spanish speakers. It should leverage a service that is capable of keeping up with 24,000 read units per second and 3,300 write units per second, and of scaling for spikes and off-peak periods. The application will also need to store user progress data. Which AWS service would meet these requirements?
A) DynamoDB
B) RDS
C) S3
D) EBS
A) DynamoDB
Duolingo uses Amazon DynamoDB to store 31 billion items in support of an online learning site that delivers lessons for 80 languages. The U.S. startup reaches more than 18 million monthly users around the world who perform more than six billion exercises using the free Duolingo lessons. The company relies heavily on Amazon DynamoDB not just for its highly scalable database, but also for high performance that reaches 24,000 read units per second and 3,300 write units per second. In addition, Duolingo uses a range of other AWS services such as Amazon EC2, based on the latest Intel Xeon Processor Family, for compute; Amazon ElastiCache to increase performance; Amazon S3 for storing image-related data; and Amazon Relational Database Service (Amazon RDS) for permanent data storage. Moving forward, Duolingo plans on leveraging AWS Elastic Beanstalk and AWS Lambda for its microservices architecture, as well as Amazon Redshift for its data analytics.
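For a rough sense of how such throughput is expressed, here is a boto3 sketch creating a DynamoDB table provisioned near those figures; the table name and key schema are assumptions, and on-demand capacity or auto scaling would usually handle spikes instead of a fixed setting.

import boto3

dynamodb = boto3.client("dynamodb")

# Table provisioned for roughly the throughput cited above.
dynamodb.create_table(
    TableName="user-progress",
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},
        {"AttributeName": "lesson_id", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "lesson_id", "AttributeType": "S"},
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 24000, "WriteCapacityUnits": 3300},
)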
Specify Secure Applications and Architectures
Your company has gone through an audit with a focus on data storage. You are currently storing historical data in Amazon Glacier. One of the results of the audit is that a portion of the infrequently-accessed historical data must be able to be accessed immediately upon request. Where can you store this data to meet this requirement?
A) Store the data in EBS
B) Leave infrequently-accessed data in Glacier.
C) S3 Standard-IA
D) S3 Standard
C) S3 Standard-IA
S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per-GB storage price and per-GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. S3 Storage Classes can be configured at the object level, and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
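A hedged boto3 sketch of both approaches, uploading new objects directly to S3 Standard-IA and adding a lifecycle transition rule; the bucket name, key, and prefix are placeholders, and data already in Glacier would first need to be restored and copied.

import boto3

s3 = boto3.client("s3")

# Store an object directly in the S3 Standard-IA storage class.
s3.put_object(
    Bucket="historical-data-bucket",
    Key="reports/2015/summary.csv",
    Body=b"date,value\n2015-01-01,42\n",
    StorageClass="STANDARD_IA",
)

# Alternatively, a lifecycle rule can transition existing objects automatically.
s3.put_bucket_lifecycle_configuration(
    Bucket="historical-data-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-standard-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": "reports/"},
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)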
Design Cost-Optimized Architectures
You are working in a large healthcare facility which uses EBS volumes on most of the EC2 instances. The CFO has approached you about some cost savings and it has been decided that some of the EC2 instances and EBS volumes would be deleted. What step can be taken to preserve the data on the EBS volumes and keep the data available on short notice?
A) Move the data to Amazon S3.
B) Take point-in-time snapshots of your Amazon EBS volumes.
C) Store the data in CloudFormation user data.
D) Archive the data to Glacier.
B) Take point-in-time snapshots of your Amazon EBS volumes.
You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all of the information that is needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.
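A minimal boto3 sketch of snapshotting a volume and later restoring it; the volume ID and Availability Zone are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Point-in-time snapshot of the volume before the instance/volume is deleted.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Pre-decommission backup",
)

# Later, the data can be brought back on short notice by creating a new
# volume from that snapshot and attaching it to an instance.
ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-east-1a",
    VolumeType="gp2",
)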
Define Performance Architectures
You are designing an architecture which will house an Auto Scaling Group of EC2 instances. The application hosted on the instances is expected to be extremely popular, and traffic forecasts predict very high volumes, so you will need a load balancer that can handle tens of millions of requests per second while maintaining high throughput at ultra-low latency. You need to select the type of load balancer to front your Auto Scaling Group to meet this high-traffic requirement. Which load balancer will you select?
A) You will need an Application Load Balancer to meet this requirement.
B) You will need a Classic Load Balancer to meet this requirement.
C) All the AWS load balancers meet the requirement and perform the same.
D) You will select a Network Load Balancer to meet this requirement.
D) You will select a Network Load Balancer to meet this requirement.
If extreme performance is needed for your application, AWS recommends that you use a Network Load Balancer. Network Load Balancer operates at the connection level (Layer 4), routing connections to targets (Amazon EC2 instances, microservices, and containers) within Amazon VPC, based on IP protocol data. Ideal for load balancing of both TCP and UDP traffic, Network Load Balancer is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone. It is integrated with other popular AWS services such as Auto Scaling, Amazon EC2 Container Service (ECS), Amazon CloudFormation, and AWS Certificate Manager (ACM).
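For illustration, a hedged boto3 sketch that provisions a Network Load Balancer in front of the group's subnets; the name and subnet IDs are placeholders, and a TCP listener and target group would still be needed.

import boto3

elbv2 = boto3.client("elbv2")

# Network Load Balancer (Layer 4) spanning the Auto Scaling group's subnets.
elbv2.create_load_balancer(
    Name="high-traffic-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"],
)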
Design Cost-Optimized Architectures
You are consulting for a state agency focused on the state lottery. You have been tasked with having 2,000,000 bar codes created as quickly as possible. This will require EC2 instances at an average CPU utilization of 70% each, so you plan to spin up 10 EC2 instances to create the bar codes. You estimate that the instances will complete the job sometime between around 11pm and 1am, and you don’t want them sitting idle for up to 9 hours until the next morning. What can you do to terminate these instances when they are done?
A) Write a cron job which queries the instance status. If a certain status is met, have the cron job kick off CloudFormation to terminate the existing instance, and create a new instance from a template.
B) Write a Python script which queries the instance status. Also write a Lambda function which can be triggered upon a certain status and terminate the instance.
C) Write a cron job which queries the instance status. Also write a Lambda function which can be triggered upon a certain status and terminate the instance.
D) You can create a CloudWatch alarm that is triggered when the average CPU utilization percentage has been lower than 10 percent for 4 hours, and terminates the instance.
D) You can create a CloudWatch alarm that is triggered when the average CPU utilization percentage has been lower than 10 percent for 4 hours, and terminates the instance.
Adding Terminate Actions to Amazon CloudWatch Alarms: You can create an alarm that terminates an EC2 instance automatically when a certain threshold has been met (as long as termination protection is not enabled for the instance). For example, you might want to terminate an instance when it has completed its work, and you don’t need the instance again. If you might want to use the instance later, you should stop the instance instead of terminating it. For information about enabling and disabling termination protection for an instance, see Enabling Termination Protection for an Instance.
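A minimal boto3 sketch of such an alarm, roughly matching the thresholds in the answer; the instance ID and the region in the terminate-action ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Terminate the instance when average CPU stays below 10% for 4 consecutive
# 1-hour periods.
cloudwatch.put_metric_alarm(
    AlarmName="terminate-when-idle",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=3600,
    EvaluationPeriods=4,
    Threshold=10.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:terminate"],
)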
Design Resilient Architectures
A financial institution has an application that produces huge amounts of actuary data, which is ultimately expected to be in the terabyte range. There is a need to run complex analytic queries against terabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Which service will best meet this requirement?
A) RDS
B) Elasticache
C) DynamoDB
D) Redshift
D) Redshift
Amazon Redshift is a fast, fully-managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It enables you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale out to petabytes of data for $1,000 per terabyte per year, less than a tenth of the cost of traditional on-premises solutions. Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data retrieved, so queries against Amazon S3 run fast, regardless of data set size.
Specify Secure Applications and Architectures
You are about to configure two EC2 instances in your VPC. The instances will be in different subnets, but in the same Availability Zone. The first instance will house the main company website and will need to be able to communicate with the database that will be housed on the second instance. What steps can you take to make sure the instances will be able to communicate properly? Choose two.
A) Put the instances in the same placement group.
B) Make sure all security groups allow communication between the app and database on the correct port using the proper protocol.
C) Configure a Virtual Private Gateway.
D) Make sure the NACL allows communication between the two subnets.
E) Make sure each instance has an elastic IP address.
B) Make sure all security groups allow communication between the app and database on the correct port using the proper protocol.
The proper ingress rules on both the security groups and the NACL need to be configured to allow communication between these instances.
D) Make sure the NACL allows communication between the two subnets.
The proper ingress rules on both the security groups and the NACL need to be configured to allow communication between these instances.
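As a concrete illustration, a hedged boto3 sketch that opens the database security group to the web tier's security group; the group IDs and the MySQL port are assumptions.

import boto3

ec2 = boto3.client("ec2")

# Allow the database security group to accept traffic from the web/app
# security group on the database port.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",            # database instance's security group (placeholder)
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],  # web instance's SG (placeholder)
        }
    ],
)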
Define Performance Architectures
An Application Load Balancer is fronting an Auto Scaling Group of EC2 instances, and the instances are backed by an RDS database. The Auto Scaling Group has been configured to use the Default Termination Policy. You are testing the Auto Scaling Group and have triggered a scale-in. Which instance will be terminated first?
A) The instance for which the load balancer stops sending traffic.
B) The longest running instance.
C) The instance launched from the oldest launch configuration.
D) The Auto Scaling Group will randomly select an instance to terminate.
C) The instance launched from the oldest launch configuration.
The ASG is using the Default Termination Policy. The default termination policy is designed to help ensure that your instances span Availability Zones evenly for high availability. The default policy is kept generic and flexible to cover a range of scenarios. The default termination policy behavior is as follows:
Determine which Availability Zones have the most instances, and at least one instance that is not protected from scale in.
Determine which instances to terminate so as to align the remaining instances to the allocation strategy for the on-demand or spot instance that is terminating. This only applies to an Auto Scaling group that specifies allocation strategies. For example, after your instances launch, you change the priority order of your preferred instance types. When a scale-in event occurs, Amazon EC2 Auto Scaling tries to gradually shift the on-demand instances away from instance types that are lower priority.
Determine whether any of the instances use the oldest launch template or configuration:
[For Auto Scaling groups that use a launch template] Determine whether any of the instances use the oldest launch template, unless there are instances that use a launch configuration. Amazon EC2 Auto Scaling terminates instances that use a launch configuration before instances that use a launch template.
[For Auto Scaling groups that use a launch configuration] Determine whether any of the instances use the oldest launch configuration.
After applying all of the above criteria, if there are multiple unprotected instances to terminate, determine which instances are closest to the next billing hour. If there are multiple unprotected instances closest to the next billing hour, terminate one of these instances at random.
Define Performance Architectures
A large financial institution is gradually moving their infrastructure and applications to AWS. The company has data needs that will utilize all of RDS, DynamoDB, Redshift, and ElastiCache. Which description best describes Amazon Redshift?
A) Cloud-based relational database.
B) Can be used to significantly improve latency and throughput for many read-heavy application workloads.
C) Near real-time complex querying on massive data sets.
D) Key-value and document database that delivers single-digit millisecond performance at any scale.
C) Near real-time complex querying on massive data sets.
Amazon Redshift is a fast, fully-managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. With Redshift, you can start small for just $0.25 per hour with no commitments and scale out to petabytes of data for $1,000 per terabyte per year, less than a tenth the cost of traditional on-premises solutions. Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data retrieved, so queries against Amazon S3 run fast, regardless of data set size.
Define Performance Architectures
A professional baseball league has chosen to use a key-value and document database for storage, processing, and data delivery. Many of the data requirements involve high-speed processing of data such as a Doppler radar system which samples the position of the baseball 2000 times per second. Which AWS data storage can meet these requirements?
A) S3
B) DynamoDB
C) RDS
D) Redshift
B) DynamoDB
Amazon DynamoDB is a NoSQL database that supports key-value and document data models, and enables developers to build modern, serverless applications that can start small and scale globally to support petabytes of data and tens of millions of read and write requests per second. DynamoDB is designed to run high-performance, internet-scale applications that would overburden traditional relational databases.
Define Performance Architectures
A gaming company is designing several new games which focus heavily on player-game interaction. The player makes a certain move and the game has to react very quickly to change the environment based on that move and to present the next decision for the player in real-time. A tool is needed to continuously collect data about player-game interactions and feed the data into the gaming platform in real-time. Which AWS service can best meet this need?
A) AWS Lambda
B) Kinesis Data Analytics
C) Kinesis Data Streams
D) AWS IoT
C) Kinesis Data Streams
Kinesis Data Streams can be used to continuously collect data about player-game interactions and feed the data into your gaming platform. With Kinesis Data Streams, you can design a game that provides engaging and dynamic experiences based on players’ actions and behaviors.
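A minimal boto3 sketch of a producer pushing one interaction event into a stream; the stream name and event fields are assumptions, and a consumer on the gaming platform would read these records in near real-time.

import boto3, json, time

kinesis = boto3.client("kinesis")

# Send one player-game interaction event into the stream.
event = {"player_id": "p-42", "action": "move_left", "ts": time.time()}
kinesis.put_record(
    StreamName="player-interactions",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["player_id"],
)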
Specify Secure Applications and Architectures
A new startup company decides to use AWS to host their web application. They configure a VPC as well as two subnets within the VPC. They also attach an internet gateway to the VPC. In the first subnet, they create the EC2 instance which will host their web application. They finish the configuration by making the application accessible from the Internet. The second subnet hosts their database and they don’t want the database accessible from the Internet. Which statement best describes this scenario?
A) The web server is in a private subnet, and the database server is in a public subnet. The public subnet has a route to the internet gateway in the route table.
B) The web server is in a private subnet, and the database server is in a private subnet. A third subnet has a route to the Internet Gateway, which allows internet access.
C) The web server is in a public subnet, and the database server is in a public subnet. The public subnet has a route to the internet gateway in the route table.
D) The web server is in a public subnet, and the database server is in a private subnet. The public subnet has a route to the internet gateway in the route table.
D) The web server is in a public subnet, and the database server is in a private subnet. The public subnet has a route to the internet gateway in the route table.
An internet gateway is a horizontally-scaled, redundant, and highly available VPC component that allows communication between your VPC and the Internet. An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses. An internet gateway supports IPv4 and IPv6 traffic. It does not cause availability risks or bandwidth constraints on your network traffic. To enable access to or from the Internet for instances in a subnet in a VPC, you must do the following:
Attach an internet gateway to your VPC.
Add a route to your subnet’s route table that directs internet-bound traffic to the internet gateway. If a subnet is associated with a route table that has a route to an internet gateway, it’s known as a public subnet. If a subnet is associated with a route table that does not have a route to an internet gateway, it’s known as a private subnet.
Ensure that instances in your subnet have a globally-unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance.
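Taken together, a hedged boto3 sketch of those steps might look like this; all resource IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Attach an internet gateway to the VPC.
igw = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw, VpcId="vpc-0123456789abcdef0")

# Route internet-bound traffic from the web subnet's route table to the IGW,
# which is what makes that subnet "public".
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw,
)

# Give the web server a globally unique address, e.g. by associating an Elastic IP.
alloc = ec2.allocate_address(Domain="vpc")
ec2.associate_address(InstanceId="i-0123456789abcdef0", AllocationId=alloc["AllocationId"])

# Security groups and network ACLs must still allow the relevant traffic.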
Design Resilient Architectures
You suspect that one of the AWS services your company is using has gone down. How can you check on the status of this service?
A) Amazon Inspector
B) AWS Trusted Advisor
C) AWS Organizations
D) AWS Personal Health Dashboard
D) AWS Personal Health Dashboard
AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view of the performance and availability of the AWS services underlying your AWS resources. The dashboard displays relevant and timely information to help you manage events in progress, and provides proactive notification to help you plan for scheduled activities. With Personal Health Dashboard, alerts are triggered by changes in the health of AWS resources, giving you event visibility and guidance to help quickly diagnose and resolve issues.
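If you also want to query these events programmatically, a minimal boto3 sketch using the AWS Health API (which backs the dashboard) might look like this; note that the API requires a Business or Enterprise support plan and is served from the us-east-1 endpoint.

import boto3

health = boto3.client("health", region_name="us-east-1")

# List currently open and upcoming events affecting your account.
events = health.describe_events(
    filter={"eventStatusCodes": ["open", "upcoming"]}
)
for e in events["events"]:
    print(e["service"], e["eventTypeCode"], e["region"])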
Define Performance Architectures
A large, big-box hardware chain is setting up a new inventory management system. They have developed a system using IoT sensors which captures the removal of items from the store shelves in real-time and want to use this information to update their inventory system. The company wants to analyze this data in the hopes of being ahead of demand and properly managing logistics and delivery of in-demand items.
Which AWS service can be used to capture this data as close to real-time as possible, while being able to both transform and load the streaming data into Amazon S3 or Elasticsearch?
A) Amazon Aurora
B) Kinesis Data Firehose
C) Kinesis Streams
D) Redshift
B) Kinesis Data Firehose
Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near-real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully-managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, transform, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
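For illustration, a minimal boto3 sketch of a producer writing one sensor reading to a Firehose delivery stream; the stream name and record fields are assumptions, and the S3 or Elasticsearch destination is configured on the delivery stream itself.

import boto3, json

firehose = boto3.client("firehose")

# Push one shelf-sensor reading into the delivery stream.
reading = {"store_id": "s-17", "sku": "hammer-16oz", "event": "removed"}
firehose.put_record(
    DeliveryStreamName="inventory-events",
    Record={"Data": (json.dumps(reading) + "\n").encode("utf-8")},
)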
Design Resilient Architectures
Your company has recently converted to a hybrid cloud environment and will slowly be migrating to a fully AWS cloud environment. The AWS side is in need of some steps to prepare for disaster recovery. A disaster recovery plan needs to be drawn up and disaster recovery drills need to be performed for compliance reasons. The company wants to establish Recovery Time and Recovery Point Objectives. The RTO and RPO can be pretty relaxed. The main point is to have a plan in place, with as much cost savings as possible. Which AWS disaster recovery pattern will best meet these requirements?
A) Multi Site
B) Warm Standby
C) Pilot Light
D) Backup and restore
D) Backup and restore
This is the least expensive option and cost is the overriding factor.
Specify Secure Applications and Architectures
Several S3 Buckets have been deleted and a few EC2 instances have been terminated. Which AWS service can you use to determine who took these actions?
A) AWS CloudWatch
B) Trusted Advisor
C) AWS Inspector
D) AWS CloudTrail
D) AWS CloudTrail
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. In addition, you can use CloudTrail to detect unusual activity in your AWS accounts. These capabilities help simplify operational analysis and troubleshooting.
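A hedged boto3 sketch of looking up who deleted the buckets via CloudTrail event history; the same call with EventName set to TerminateInstances covers the terminated instances.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Find who issued DeleteBucket calls in the recent event history.
resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "DeleteBucket"}]
)
for event in resp["Events"]:
    print(event["EventTime"], event["Username"], event["EventName"])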
Specify Secure Applications and Architectures
A new startup company decides to use AWS to host their web application. They configure a VPC as well as two subnets within the VPC. They also attach an internet gateway to the VPC. In the first subnet, they create the EC2 instance which will host their web application. They finish the configuration by making the application accessible from the Internet. The second subnet has an instance hosting a smaller, secondary application. But this application is not currently accessible from the Internet. What could be potential problems?
A) The EC2 instance is not attached to an internet gateway.
B) The second subnet does not have a route in the route table to the internet gateway.
C) The second subnet does not have a route in the route table to the virtual private gateway.
D) The second subnet does not have a public IP address.
E) The EC2 instance does not have a public IP address.
B) The second subnet does not have a route in the route table to the internet gateway.
E) The EC2 instance does not have a public IP address.
To enable access to or from the internet for instances in a subnet in a VPC, you must do the following:
Attach an internet gateway to your VPC.
Add a route to your subnet’s route table that directs internet-bound traffic to the internet gateway. If a subnet is associated with a route table that has a route to an internet gateway, it’s known as a public subnet. If a subnet is associated with a route table that does not have a route to an internet gateway, it’s known as a private subnet.
Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance.
Specify Secure Applications and Architectures
An organization of about 100 employees has performed the initial setup of users in IAM. All users except administrators have the same basic privileges. But now it has been determined that 50 employees will have extra restrictions on EC2. They will be unable to launch new instances or alter the state of existing instances. What will be the quickest way to implement these restrictions?
A) Create an IAM Role for the restrictions. Attach it to the EC2 instances.
B) Create the appropriate policy. Create a new group for the restricted users. Place the restricted users in the new group and attach the policy to the group.
C) Create the appropriate policy. With only 50 users, attach the policy to each user.
D) Create the appropriate policy. Place the restricted users in the new policy.
B) Create the appropriate policy. Create a new group for the restricted users. Place the restricted users in the new group and attach the policy to the group.
You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies. IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, if a policy allows the GetUser action, then a user with that policy can get user information from the AWS Management Console, the AWS CLI, or the AWS API. When you create an IAM user, you can choose to allow console or programmatic access. If console access is allowed, the IAM user can sign in to the console using a user name and password. Or if programmatic access is allowed, the user can use access keys to work with the CLI or API.
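To make this concrete, here is a hedged boto3 sketch of creating the deny policy, a group, and adding users to it; the policy name, group name, user names, and the exact list of denied EC2 actions are assumptions.

import boto3, json

iam = boto3.client("iam")

# Deny launching new instances and changing instance state.
policy = iam.create_policy(
    PolicyName="DenyEc2LaunchAndStateChange",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": [
                "ec2:RunInstances",
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:RebootInstances",
                "ec2:TerminateInstances",
            ],
            "Resource": "*",
        }],
    }),
)

iam.create_group(GroupName="ec2-restricted")
iam.attach_group_policy(GroupName="ec2-restricted", PolicyArn=policy["Policy"]["Arn"])

for user in ["user01", "user02"]:  # repeat for all 50 restricted users
    iam.add_user_to_group(GroupName="ec2-restricted", UserName=user)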
Define Performance Architectures
An application is hosted on an EC2 instance in a VPC. The instance is in a subnet in the VPC, and the instance has a public IP address. There is also an internet gateway and a security group with the proper ingress configured. But your testers are unable to access the instance from the Internet. What could be the problem?
A) Make sure the instance has a private IP address.
B) A NAT gateway needs to be configured.
C) A virtual private gateway needs to be configured.
D) Add a route to the route table, from the subnet containing the instance, to the Internet Gateway.
D) Add a route to the route table, from the subnet containing the instance, to the Internet Gateway.
The question doesn’t state if the subnet containing the instance is public or private. An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet.
In your subnet route table, you can specify a route for the internet gateway to all destinations not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6). Alternatively, you can scope the route to a narrower range of IP addresses, for example, the public IPv4 addresses of your company’s public endpoints outside of AWS, or the Elastic IP addresses of other Amazon EC2 instances outside your VPC.

To enable communication over the Internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that’s associated with a private IPv4 address on your instance. Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet. The internet gateway logically provides the one-to-one NAT on behalf of your instance, so that when traffic leaves your VPC subnet and goes to the Internet, the reply address field is set to the public IPv4 address or Elastic IP address of your instance and not its private IP address. Conversely, traffic that’s destined for the public IPv4 address or Elastic IP address of your instance has its destination address translated into the instance’s private IPv4 address before the traffic is delivered to the VPC.

To enable communication over the Internet for IPv6, your VPC and subnet must have an associated IPv6 CIDR block, and your instance must be assigned an IPv6 address from the range of the subnet. IPv6 addresses are globally unique, and therefore public by default.
Design Resilient Architectures
You have configured an Auto Scaling Group of EC2 instances fronted by an Application Load Balancer and backed by an RDS database. You want to begin monitoring the EC2 instances using CloudWatch metrics. Which metric is not readily available out of the box?
A) DiskReadOps
B) NetworkIn
C) CPU utilization
D) Memory utilization
D) Memory utilization
Memory utilization is not available as an out of the box metric in CloudWatch. You can, however, collect memory metrics when you configure a custom metric for CloudWatch. Types of custom metrics that you can set up include:
Memory utilization
Disk swap utilization
Disk space utilization
Page file utilization
Log collection
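For reference, a minimal boto3 sketch of publishing memory utilization as a custom metric; the namespace, dimension, and value are illustrative, and in practice the CloudWatch agent publishes these metrics for you.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish one memory utilization data point as a custom metric.
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Unit": "Percent",
        "Value": 62.5,
    }],
)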