Resilient Architecture part 1 Flashcards
A small business specializing in video processing wants to prototype cloud storage in order to lower its costs. However, management is wary of storing its client files in the cloud rather than on premises. They are focused on cost savings and experimenting with the cloud at this time. What is the best solution for their prototype?
A) Install a VPN, set up an S3 bucket for their files created within the last month, and set up an additional S3-IA bucket for older files. Create a lifecycle policy in S3 to move files older than 30 days into the S3-IA bucket nightly.
B) Install an AWS storage gateway using stored volumes.
C) Set up a Direct Connect and back all local hard drives up to S3 over the Direct Connect nightly.
D) Install an AWS storage gateway using cached volumes.
<p>B. Anytime the primary consideration is storage with a local data presence—where data must be stored or seen to be stored locally—a storage gateway gives you the best option. This reduces the choices to B and D. B will store the files locally and push them to S3 as a backup, while D will store the files in S3 and provide local cached copies. Since management is concerned about primary files being stored in the cloud, B is the best choice: local files remain the primary source of data, while still allowing the company to experiment with cloud storage without “risking” its data being stored primarily in the cloud.</p>
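To make option A's lifecycle policy concrete, here is a sketch of the rule as a data structure, in the shape boto3's put_bucket_lifecycle_configuration expects. The rule ID, the bucket name, and the use of a single bucket with a storage-class transition (instead of two buckets plus nightly copies) are illustrative assumptions, not the question's exact design.

```python
import json

# Lifecycle rule: transition objects older than 30 days to STANDARD_IA.
# A transition rule within one bucket is the idiomatic replacement for the
# "second S3-IA bucket + nightly move" described in option A.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "transition-after-30-days",   # illustrative rule name
            "Filter": {"Prefix": ""},           # empty prefix: every object
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
            ],
        }
    ]
}

# With boto3 this would be applied roughly as (not run here):
# s3.put_bucket_lifecycle_configuration(
#     Bucket="video-archive",  # hypothetical bucket name
#     LifecycleConfiguration=lifecycle_configuration,
# )
print(json.dumps(lifecycle_configuration, indent=2))
```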
<p>For which of the following HTTP methods does S3 have eventual consistency? (Choose two.) PUTs of new objects UPDATEs DELETEs PUTs that overwrite existing objects</p>
<p>C, D. PUTs of new objects have read-after-write consistency. DELETEs and overwrite PUTs have eventual consistency across S3.</p>
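A common client-side response to eventual consistency is to poll with retries until the expected (new) value becomes visible. This is a minimal sketch under the assumption that a stale read of an overwritten object is modeled as returning None; the fetch callable stands in for a real GET request.

```python
import time

def read_with_retry(fetch, attempts=5, delay=0.0):
    """Poll a read until it returns a value, tolerating the window where an
    overwrite PUT has not yet propagated. `fetch` is a zero-argument callable
    (a stand-in for a real GET) returning the object data or None."""
    for _ in range(attempts):
        value = fetch()
        if value is not None:
            return value
        time.sleep(delay)  # back off between polls
    return None

# Simulate eventual consistency: two stale reads, then the new version appears.
responses = iter([None, None, b"new-version"])
result = read_with_retry(lambda: next(responses), attempts=5, delay=0)
```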
<p>What is the smallest file size that can be stored on standard class S3? 1 byte 1 MB 0 bytes 1 KB</p>
<p>C. First, note that “on standard class S3” is a red herring, and irrelevant to the question. Second, objects on S3 can be 0 bytes. This is equivalent to using touch on a file and then uploading that 0-byte file to S3.</p>
<p>You’ve just created a new S3 bucket named ytmProfilePictures in the US East 2 region and created a folder at the root level of the bucket called images/. You’ve turned on website hosting and asked your content team to upload images into the images/ folder. At what URL will these images be available through a web browser?
https://s3-us-east-2.amazonaws.com/ytmProfilePictures/images
https://s3-website-us-east-2.amazonaws.com/ytmProfilePictures/images
https://ytmProfilePictures.s3-website-us-east-2.amazonaws.com/images
https://ytmProfilePictures.s3-website.us-east-2.amazonaws.com/images</p>
<p>D. Since website hosting is enabled, the website endpoint applies rather than the REST endpoint, which rules out the first two options (and note that both of those also omit the bucket name from the host). Website endpoints take the form bucket-name.s3-website.region.amazonaws.com, and for US East 2 (us-east-2) the endpoint separates s3-website from the region with a dot, making the last option correct.</p>
<p>Which of the following statements is not true?
Standard S3, S3-IA, and S3 One Zone-IA all are equally durable.
The availability of S3-IA and S3 One Zone-IA are identical.
Standard S3, S3-IA, and S3 One Zone-IA all have different availabilities.
S3 One Zone-IA is as durable as standard S3.</p>
<p>B. This is an important distinction when understanding S3 classes. Standard S3, S3-IA, and S3 One Zone-IA are all equally durable, although in One Zone-IA, data will be lost if the availability zone is destroyed. Each class has different availability, though: standard S3 is 99.99%, S3-IA is 99.9%, and S3 One Zone-IA is 99.5%. Therefore, it is false that the availability of S3-IA and S3 One Zone-IA is identical (option B).</p>
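Those availability percentages translate into concrete downtime budgets, which makes them easier to memorize. A quick sketch (the flat 365-day year is a simplifying assumption):

```python
def yearly_downtime_hours(availability_percent):
    """Convert an availability percentage into expected hours of
    unavailability over a 365-day (8,760-hour) year."""
    return (1 - availability_percent / 100) * 365 * 24

# S3 Standard 99.99%  -> ~0.9 hours/year
# S3-IA       99.9%   -> ~8.8 hours/year
# One Zone-IA 99.5%   -> ~43.8 hours/year
downtime = {pct: yearly_downtime_hours(pct) for pct in (99.99, 99.9, 99.5)}
```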
<p>Which of the following AWS services appear in the AWS console across all regions? (Choose two.) S3 EC2 IAM RDS</p>
<p>A, C. The wording of this question is critical. S3 buckets are created within a region, but the AWS console and your account will show you all S3 buckets at all times. While a bucket is created in a specific region, names of buckets are also global. IAM permissions are also global and affect all regions. RDS and EC2 instances are region specific, and only appear in the regions in which they were created in the AWS console.</p>
<p>You have an S3 bucket and are working on cost estimates for your customer. She has asked you about pricing of objects stored in S3. There are currently objects in the buckets ranging from 0 bytes to over 1 GB. In this situation, what is the smallest file size that S3-IA will charge you for? 1 byte 1 MB 0 bytes 128 KB</p>
<p>D. This is a bit of a trick question if you’re not careful. While S3 allows for 0-byte objects, and charges as such, S3-IA charges all objects as if they are at least 128 KB in size. So while you can store a smaller object in S3-IA, it will be considered 128 KB for pricing and charging purposes.</p>
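The S3-IA billing floor is easy to express as a one-liner, which may help it stick: actual size, floored at 128 KB.

```python
IA_MINIMUM_BILLABLE_BYTES = 128 * 1024  # S3-IA bills every object as >= 128 KB

def billable_size(object_size_bytes):
    """Size S3-IA uses for pricing purposes: the real size, or 128 KB,
    whichever is larger. Objects smaller than 128 KB are still stored at
    their true size -- only the charge is rounded up."""
    return max(object_size_bytes, IA_MINIMUM_BILLABLE_BYTES)
```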
<p>Which of the following would you use for setting up AMIs from which new instances are created in an Auto Scaling policy?
The Auto Scaling policy itself
The security group for the Auto Scaling policy
The Auto Scaling group used by the Auto Scaling policy
The launch configuration used by the Auto Scaling policy</p>
<p>D. Launch configurations are where details are specified for creating (launching) new instances (option D). Security groups have to do more with what traffic is allowed into and out of the launched instances. The remaining two options—A and C—don’t make sense in this context.</p>
<p>You terminate an EC2 instance and find that the EBS root volume that was attached to the instance was also deleted. How can you correct this?
You can’t. A root volume is always deleted when the EC2 instance attached to that volume is deleted.
Take a snapshot of the EBS volume while the EC2 instance is running. Then, when the EC2 instance is terminated, you can restore the EBS volume from the snapshot.
Remove termination protection from the EC2 instance.
Use the AWS CLI to change the DeleteOnTermination attribute for the EBS volume to “false.”</p>
<p>D. By default, EBS root volumes are terminated when the associated instance is terminated. However, this is only the default value; therefore A is not correct. Option B is not directly addressing the question; the EBS volume would still be deleted even if you take a snapshot. Option C is not relevant, but option D is: You can use the AWS CLI (or the console) to set the root volume to persist after instance termination.</p>
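For reference, the CLI change described in option D takes a small JSON block-device-mapping payload. This sketch builds that payload; the instance ID and device name in the comment are illustrative assumptions.

```python
import json

def delete_on_termination_mapping(device_name, delete=False):
    """Build the --block-device-mappings JSON structure used when toggling
    an EBS volume's DeleteOnTermination flag via the AWS CLI."""
    return [{"DeviceName": device_name, "Ebs": {"DeleteOnTermination": delete}}]

payload = json.dumps(delete_on_termination_mapping("/dev/xvda"))
# The CLI invocation would then look roughly like (not run here):
# aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
#     --block-device-mappings '<payload>'
```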
<p>Can you attach an EBS volume to more than one EC2 instance at the same time?
Yes, as long as the volume is not the root volume.
No, EBS volumes cannot be attached to more than one instance at the same time.
Yes, as long as the volume is one of the SSD classes and not magnetic storage.
Yes, as long as at least one of the instances uses the volume as its root volume.</p>
<p>B. EBS volumes can attach to only a single instance at a time. The other options are simply distractors.</p>
<p>How does AWS allow you to add metadata to your EC2 instances? Certificates Tags Policies Labels</p>
<p>B. All instances and most services in AWS provide tagging for metadata. Certificates are related to SSL and help define the identity of a site or transmission, policies are related to permissions and roles, and labels are not (currently) an AWS construct.</p>
<p>Which of the following can be deployed across availability zones? Cluster placement groups Placement groups Spread placement groups Cross-region placement groups</p>
<p>C. Spread placement groups—which are relatively new to AWS—can be placed across multiple availability zones. Cluster placement groups cannot, and placement groups generally refers to cluster placement groups. Cross-region placement groups is a made-up term.</p>
<p>You are tasked with recommending a storage solution for a large company with a capital investment in an NFS-based backup system. The company wants to investigate cloud-based storage but doesn’t want to lose its software investment either. Which type of storage gateway would you recommend? File gateway Cached volume gateway Stored volume gateway Tape gateway</p>
<p>A. Each of the options is a valid configuration for a storage gateway. Of the options, file gateway provides an NFS-style protocol for transferring data to and from the gateway and therefore is the best option.</p>
<p>You are tasked with prototyping a cloud-based storage solution for a small business. The business’s chief concern is low network latency, as its systems need near-instant access to all of its datasets. Which storage gateway would you recommend? File gateway Cached volume gateway Stored volume gateway Tape gateway</p>
<p>C. A stored volume gateway stores data at the on-premises data store and backs up to S3 asynchronously to support disaster recovery. Most important, though, is that by storing data locally, network latency is minimal. Of the available options, only a stored volume gateway provides local data with this speed of access across an entire dataset.</p>
<p>You are the solutions architect for a mapping division that has inherited a massive geospatial dataset from a recent acquisition. The data is all on local disk drives, and you want to transition the data to AWS. With datasets of over 10 TB, what is the best approach to getting this data into AWS? S3 with Transfer Acceleration Cached volume gateway Snowball Shipping the drives to AWS</p>
<p>C. Anytime very large data needs to be moved into AWS, consider Snowball. Snowball is a physical device that allows for data to be physically sent to AWS rather than transferred over a network. It is the only solution that will not potentially cause disruptive network outages or slowdowns.</p>
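The "use Snowball for very large data" rule of thumb comes from simple transfer arithmetic: at typical office bandwidth, tens of terabytes take weeks over the wire. A rough estimator, assuming decimal terabytes (1 TB = 8,000,000 megabits) and a stated sustained link utilization:

```python
def transfer_days(dataset_tb, bandwidth_mbps, utilization=0.8):
    """Rough days needed to push a dataset over a network link.
    Assumes 1 TB = 8e6 megabits and the given sustained utilization
    (both are simplifying assumptions for a back-of-envelope estimate)."""
    megabits = dataset_tb * 8_000_000
    seconds = megabits / (bandwidth_mbps * utilization)
    return seconds / 86_400  # seconds per day

# 10 TB over a 100 Mbps link at 80% utilization is about a week and a half,
# before accounting for any disruption to normal traffic.
estimate = transfer_days(10, 100)
```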
<p>Which of the following are not reasons to use a cached volumes storage gateway? (Choose two.)
You want low-latency access to your entire dataset.
You want to reduce the cost of on-site storage.
You want to support iSCSI storage volumes.
You want low-latency access to your most commonly accessed data.</p>
<p>A, C. A cached volume gateway stores the most commonly accessed data locally (option D) while keeping the entire dataset in S3. This has the effect of reducing the cost of storage on-site, because you need less (option B). Since both of these are true, you need to select the other two options as reasons to not use a cached volumes gateway: A and C.</p>
<p>Which of the following storage gateway options is best for traditional backup applications? File gateway Cached volume gateway Stored volume gateway Tape gateway</p>
<p>A. Be careful here. While it might seem at a glance that a tape gateway is best, most backup solutions do not employ tape backups. They use NFS mounts and file-based backups, which is exactly what a file gateway is best used for.</p>
<p>Which of the following storage gateway options is best for applications where latency of your entire dataset is the priority? File gateway Cached volume gateway Stored volume gateway Tape gateway</p>
<p>C. If the entire dataset is needed, then a stored volume gateway is a better choice than a cached volume gateway. The stored volume stores the entire dataset on premises and therefore is very fast for all data access.</p>
<p>Which of the following storage gateway options is best for reducing the costs associated with an off-site disaster recovery solution? File gateway Cached volume gateway Stored volume gateway Tape gateway</p>
<p>D. A tape gateway is ideal for replacing off-site tape directories. The gateway is a virtual tape directory and avoids the costs of transporting actual tapes to an expensive off-site location.</p>
<p>For which of the following storage classes do you need to specify an availability zone? S3 S3-IA S3 One Zone-IA None of the above</p>
<p>D. While S3 does use availability zones to store objects in buckets, you do not choose the availability zone yourself. Even S3 One Zone-IA does not allow you to specify the AZ for use.</p>
<p>creases)? (Choose two.)
S3 will scale to handle the load if you have Auto Scaling set up.
S3 will scale automatically to ensure your service is not interrupted.
Scale spreads evenly across AWS network to minimize the effect of a spike.
A few instances are scaled up dramatically to minimize the effect of the spike.</p>
<p>B, C. S3 is built to automatically scale in times of heavy application usage. There is no requirement to enable Auto Scaling (A); rather, this happens automatically (so B is correct). Further, S3 tends to scale evenly across the AWS network (C). Option D is the opposite of what AWS intends.</p>
<p>You have been tasked with helping a company migrate its expensive off-premises storage to AWS. It will still primarily back up files from its on-premises location to a local NAS. These files then need to be stored off-site (in AWS rather than the original off-site location). The company is concerned with durability and cost and wants to retain quick access to its files. What should you recommend?
Copying files from the NAS to an S3 standard class bucket
Copying files from the NAS to an S3 One Zone-IA class bucket
Copying the files from the NAS to EBS volumes with provisioned IOPS
Copying the files from the NAS to Amazon Glacier</p>
<p>B. When evaluating S3 storage, all storage classes have the same durability. For cost, though, S3 One Zone-IA is the clear winner. Only Glacier is potentially less expensive but does not provide the same quick file access that S3 One Zone-IA does.</p>
<p>How many S3 buckets can you create per AWS account, by default? 25 50 100 There is not a default limit.</p>
<p>C. By default, all AWS accounts can create up to 100 buckets. However, this limit can easily be raised by AWS if you request an upgrade.</p>
<p>How are objects uploaded to S3 by default?
In parts
In a single operation
You must configure this option for each S3 bucket explicitly.
Via the REST API</p>
<p>B. S3 uploads are, by default, done via a single operation, usually via a single PUT operation. AWS suggests that you can upload objects up to 100 MB before changing to Multipart Upload.</p>
<p>Which of the following are the ways you should consider using Multipart Upload? (Choose two.)
For uploading large objects over a stable high-bandwidth network to maximize bandwidth
For uploading large objects to reduce the cost of ingress related to those objects
For uploading any size files over a spotty network to increase resiliency
For uploading files that must be appended to existing files</p>
<p>A, C. Multipart Upload is, as is probably the easiest answer here, ideal for large objects on stable networks (A). But it also helps handle less-reliable networks: smaller parts can fail and be retried individually while the rest get through, reducing the overall failure rate (C). There is no cost associated with data ingress (B), and D doesn’t make much sense at all!</p>
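The resiliency benefit of Multipart Upload comes from the fact that each part is an independent request that can be retried on its own. This sketch models that locally; the `send` callable is a stand-in for the real per-part upload request, not an AWS API.

```python
def split_into_parts(data, part_size):
    """Split a payload into fixed-size chunks, as Multipart Upload does;
    each part can then be sent (and retried) independently."""
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]

def upload_with_retries(parts, send, max_attempts=3):
    """Send each part, retrying a failed part individually. On a spotty
    network, a failure costs re-sending one part, not the whole object."""
    for number, part in enumerate(parts, start=1):
        for attempt in range(max_attempts):
            try:
                send(number, part)
                break
            except ConnectionError:
                if attempt == max_attempts - 1:
                    raise  # this part kept failing; give up
```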
<p>How is a presigned URL different from a normal URL? (Choose two.)
A presigned URL has permissions associated with certain objects provided by the creator of the URL.
A presigned URL has permissions associated with certain objects provided by the user of the URL.
A presigned URL allows access to private S3 buckets without requiring AWS credentials.
A presigned URL includes encrypted credentials as part of the URL.</p>
<p>A, C. Presigned URLs are created to allow users without AWS credentials to access specific resources (option C). And it’s the creator of the URL (option A) that assigns these permissions, rather than the user (option B). Finally, these credentials are associated with the URL but are not encrypted into the URL itself.</p>
<p>How long is a presigned URL valid? 60 seconds 60 minutes 24 hours As long as it is configured to last</p>
<p>D. A presigned URL is always configured at creation for a valid Time to Live (often referred to as TTL). This time can be very short, or quite long.</p>
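The configurable validity window can be pictured as an expiry timestamp baked into the URL's query string at creation time. This is a toy model only: real S3 presigned URLs also carry a cryptographic signature, credential scope, and other parameters, which this sketch deliberately omits.

```python
import time
from urllib.parse import urlencode

def presign_sketch(base_url, expires_in_seconds, now=None):
    """Toy presigned URL: the creator picks the TTL and embeds the absolute
    expiry time. (Real presigned URLs are signed; this models only the
    configurable validity window.)"""
    now = time.time() if now is None else now
    expires = int(now + expires_in_seconds)
    return f"{base_url}?{urlencode({'Expires': expires})}", expires

def is_expired(expires, now):
    """A checker compares the embedded expiry against the current clock."""
    return now >= expires

# Hypothetical object URL and a fixed clock, for a deterministic example.
url, expires = presign_sketch(
    "https://example-bucket.s3.amazonaws.com/report.pdf", 3600, now=1_700_000_000)
```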
<p>Which of the following behaviors is consistent with how S3 handles object operations on a bucket?
A process writes a new object to Amazon S3 and immediately lists keys within its bucket. The new object does not appear in the list of keys.
A process deletes an object, attempts to immediately read the deleted object, and S3 still returns the deleted data.
A process deletes an object and immediately lists the keys in the bucket. S3 returns a list with the deleted object in the list.
All of the above</p>
<p>D. These are all consistent with S3 behavior. Option A could occur while the new object is still being propagated across S3’s storage infrastructure. B and C could occur as a result of eventual consistency, where the result of a DELETE operation does not immediately appear.</p>
<p>Which of the following storage media are object based? (Choose two.) S3-IA EBS EFS S3 standard</p>
<p>A, D. All S3 storage classes are object-based, while EBS and EFS are block-based.</p>
<p>How many PUTs per second does S3 support? 100 1500 3500 5000</p>
<p>C. This is important because it reflects a recent change by AWS. Until 2018, there was a hard limit on S3 of 100 PUTs per second, but that limit has now been raised to 3500 PUTs per second per prefix.</p>
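Because the 3,500-PUT ceiling applies per key prefix, a common scaling pattern is to hash keys into several prefixes so writes spread across them. A minimal sketch of that pattern (the shard count and prefix format are illustrative choices, not an AWS requirement):

```python
import hashlib

def sharded_key(key, shards=16):
    """Prepend a deterministic hash-derived prefix to a key. With N prefixes,
    the per-prefix request ceiling applies to each one, raising the aggregate
    write ceiling to roughly N times the per-prefix limit."""
    shard = int(hashlib.md5(key.encode()).hexdigest(), 16) % shards
    return f"{shard:02x}/{key}"
```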
<p>What unique domain name do S3 buckets created in US East (N. Virginia) have, as compared to other regions? s3.amazonaws.com s3-us-east-1.amazonaws.com s3-us-east.amazonaws.com s3-amazonaws.com</p>
<p>A. S3 buckets have names based upon the S3 identifier (s3), the region (us-east-1 in this case), and the amazonaws.com domain. Then, the bucket name appears after the domain. That results in a URL like https://s3-us-east-1.amazonaws.com/prototypeBucket32. However, buckets in US East are a special case and should use the special, unique endpoint s3.amazonaws.com (option A).</p>
<p>Which of the following are valid domain names for S3 buckets? (Choose two.)
s3.us-east-1.amazonaws.com
s3-us-west-2.amazonaws.com
s3.amazonaws.com
s3-jp-west-2.amazonaws.com</p>
<p>B, C. Option A is not the correct format; s3 should be separated from the region with a dash (-). Option B is valid, and option C is the correct unique URL for US East (N. Virginia). Option D is the right format, but jp-west-2 is not an AWS region.</p>
<p>What are the two styles of URLs that AWS supports for S3 bucket access? (Choose two.) Virtual-hosted-style URLs Domain-hosted-style URLs Apex zone record URLs Path-style URLs</p>
<p>A, D. S3 supports two styles of bucket URLs: virtual-hosted-style and path-style URLs. Virtual-hosted-style URLs are of the form http://bucket.s3-aws-region.amazonaws.com, and path-style URLs are the traditional URLs you’ve seen: https://s3-aws-region.amazonaws.com/bucket-name.</p>
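The two styles are easy to compare as string templates. This sketch follows the dash-separated region convention shown in the answer above; note that AWS's newer endpoints use a dot between s3 and the region instead, so treat the exact separator as an assumption of this deck's convention.

```python
def virtual_hosted_url(bucket, region, key):
    """Virtual-hosted-style: the bucket name appears in the hostname."""
    return f"https://{bucket}.s3-{region}.amazonaws.com/{key}"

def path_style_url(bucket, region, key):
    """Path-style: the bucket name appears as the first path segment."""
    return f"https://s3-{region}.amazonaws.com/{bucket}/{key}"
```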
<p>Which of the following are not true about S3? (Choose two.)
Buckets are created in specific regions.
Bucket names exist in a per-region namespace.
Buckets are object-based.
Each S3 bucket stores up to 5 TB of object data.</p>
<p>B, D. While S3 buckets are created in a specific region (A), the names of buckets are global and must exist in a global namespace (so B is untrue). Buckets are object-based (so C is true), and while a single object is limited to 5 TB, buckets are unlimited in total storage capacity (so D is false).</p>
<p>Which of the following is the best approach to ensuring that objects in your S3 buckets are not accidentally deleted?
Restrictive bucket permissions
Enabling versioning on buckets
Enabling MFA Delete on buckets
All of these options are equally useful.</p>
<p>C. MFA Delete is the absolute best means of ensuring that objects are not accidentally deleted. MFA—Multi-Factor Authentication—ensures that any object deletion requires multiple forms of authentication.</p>
<p>What HTTP request header is used by MFA Delete requests? x-delete x-amz-mfa x-aws-mfa x-amz-delete</p>
<p>B. All Amazon-specific request headers begin with x-amz. This is important to remember as it will help eliminate lots of incorrect answers. This leaves only x-amz-mfa.</p>
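For completeness, the header's value pairs the MFA device's serial number with the current one-time code, separated by a space. A small sketch (the serial ARN and code below are illustrative, not real credentials):

```python
def mfa_header(serial_number, auth_code):
    """Build the x-amz-mfa request header: the MFA device serial and the
    current one-time code, space-separated."""
    return {"x-amz-mfa": f"{serial_number} {auth_code}"}

# Hypothetical virtual MFA device ARN and code:
headers = mfa_header("arn:aws:iam::123456789012:mfa/alice", "123456")
```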
<p>Which of the following operations will take advantage of MFA Delete, if it is enabled? (Choose two.)
Deleting an S3 bucket
Changing the versioning state of a bucket
Permanently deleting an object version
Deleting an object’s metadata</p>
<p>B, C. MFA Delete applies to deleting objects, not buckets (so option A is incorrect). It affects changing the versioning state of a bucket or permanently deleting any object (or a version of that object); this makes B and C correct. Deleting an object’s metadata while leaving the object intact does not require MFA Delete.</p>
<p>When using an MFA Delete–enabled bucket to delete an object, from where does the authentication code come?
A hardware or virtual MFA device
The token section of the AWS console
The AWS REST API under delete-codes in a bucket’s metadata
None of these</p>
<p>A. This answer simply has to be memorized. MFA Delete authentication codes are pulled from hardware or virtual MFA devices, like Google Authenticator on an iPhone.</p>
<p>Who can enable versioning on an S3 bucket?
All authorized IAM users of the bucket
A, C, and D
The bucket owner
The root account that owns the bucket</p>
<p>B. The bucket owner, root account, and all authorized IAM users of a bucket are allowed to enable versioning.</p>
<p>Which of the following exist and are attached to an object stored in S3? (Choose two.) Metadata Data Authentication ID Version history</p>
<p>A, B. Each object in S3 has a name, value (data), version ID, and metadata. The version history of an object won’t exist unless versioning is turned on, so it’s not always a valid answer.</p>
<p>CloudFront is a web service for distributing what type of content? (Choose two.)
Object-based storage
Static files
Script-generated or programmatically generated dynamic content
All of the above</p>
<p>B, C. CloudFront is intended to cache and deliver static files from your origin servers to users or clients. Dynamic content is also servable through CloudFront from EC2 or other web servers. Object-based storage doesn’t make sense in this context, as CloudFront is a distribution mechanism, not a storage facility.</p>
<p>What are the sources of information that CloudFront serves data from called? Service providers Source servers Static servers Origin servers</p>
<p>D. CloudFront serves content from origin servers, usually static files and dynamic responses. These origin servers are often S3 buckets for static content and EC2 instances for dynamic content.</p>
<p>Which of the following are typical origin servers for a CloudFront distribution? (Choose two.) EC2 instances Amazon Glacier archives API Gateway S3 buckets</p>
<p>A, D. CloudFront serves content from origin servers, usually static files and dynamic responses. These origin servers are often S3 buckets for static content and EC2 instances for dynamic content (options A and D).</p>
<p>Which of the following are not origin servers for a CloudFront distribution? (Choose two.) Docker containers running on ECS MySQL ResultSet S3 buckets Redshift workloads</p>
<p>B, D. CloudFront serves content from origin servers, usually static files and dynamic responses. These origin servers are often S3 buckets for static content and EC2 instances for dynamic content (meaning option C is valid). Containers can also be used in place of EC2 instances, making option A valid as well. This leaves B and D as invalid origin servers.</p>
<p>Which of the following are not origin servers for a CloudFront distribution? (Choose two.) Elastic load balancer Route 53 recordsets SQS subscription endpoint SNS topic retrieval endpoint</p>
<p>C, D. CloudFront is able to distribute content from an ELB, rather than directly interfacing with S3, and can do the same with a Route 53 recordset. These allow the content to come from multiple instances. This means that options C and D are invalid origin servers and therefore the correct answers.</p>
<p>What is a collection of edge locations called? Region Availability zone CloudFront Distribution</p>
<p>D. A CloudFront distribution is a collection of edge locations across the world.</p>
<p>Which of the following store content that is served to users in a CloudFront-enabled web application? (Choose two.) Availability zones Edge locations Route 53 EC2 instances</p>
<p>B, D. Availability zones are not content storage devices; they are virtual data centers. Edge locations are used by CloudFront distributions to store cached content (so correct). Route 53 is the Amazon DNS service. EC2 instances can serve content from processes (so also correct).</p>
<p>You support a web application that uses a CloudFront distribution. A banner ad that was posted the previous night at midnight has an error in it, and you’ve been tasked with removing the ad so that users don’t see the error. What steps should you take? (Choose two.)
Delete the banner image from S3.
Remove the ad from the website.
Wait for 24 hours and the edge locations will automatically expire the ad from their caches.
Clear the cached object manually.</p>
<p>B, D. You must perform both steps B and D, and you must perform B before D or the banner ad could get re-cached. Also note that expiring a cached object manually incurs a cost.</p>
<p>By default, how long do edge locations cache objects? 12 hours 24 hours 48 hours 360 minutes</p>
<p>B. The default TTL for edge locations is 24 hours.</p>
<p>How are datasets utilized by stored volumes backed up to S3?
Asynchronously
Synchronously
The backup method is specified by the user at configuration time.
Synchronously unless the backup takes more than 2 seconds; then the backup switches to asynchronous</p>
<p>A. All data is backed up to S3 asynchronously when a stored volume is used. This ensures that no lag is incurred by clients that interact with the stored volumes on-site.</p>
<p>When should you use AWS Direct Connect instead of Snowball?
AWS Direct Connect is usually a better option than Snowball.
AWS Direct Connect is almost never a better option than Snowball.
If you have more than 50 TB of data to transfer, use Snowball.
If you have less than 50 TB of data to transfer, use Snowball.</p>
<p>A. AWS Direct Connect is a dedicated high-speed connection between your on-premises network and AWS. Because of this, a direct connect is almost always a better choice than shipping out a Snowball, loading data to it, and then shipping it back.</p>
<p>Which of the following AWS services can be used to store large objects? (Choose two.) Redshift S3 Oracle EC2</p>
<p>B, C. This is a little tricky. S3 is an obvious choice. Redshift is suited for analysis data, so probably not large objects. EC2 is compute, which leaves Oracle. It is possible—without any better answers—to use Oracle (via RDS or installed on EC2) to store large objects in a BLOB-type field.</p>
<p>You have created a static website and posted an HTML page as home.html in the root level of your S3 bucket. The bucket is named californiaroll and is located in US West 1. At what URL can you access the HTML page?
http://californiaroll.s3-website.us-west-1.amazonaws.com/home.html
http://s3-website-us-west-1.amazonaws.com/californiaroll/home.html
http://californiaroll.s3-website-us-west-2.amazonaws.com/public_html/home.html
http://californiaroll.s3-website-us-west-1.amazonaws.com/home.html</p>
<p>D. First, ensure that the domain name is correct. Option A incorrectly separates s3-website from the region, and C has the wrong region. B does not have the bucket name in the URL, which it should for website hosting. This leaves D, the correct answer.</p>
<p>You have a variety of images with names like image-001.jpg and image-002.jpg in an S3 bucket named phoneboothPhotos created in the EU West 1 region. You have enabled website hosting on this bucket. Which URL would allow access to the photos?
http://phoneboothPhotos.s3-website-eu-west-1.amazonaws.com/phoneboothPhotos/image-001.jpg
http://phoneboothPhotos.s3-website-eu-west-1.amazonaws.com/phoneboothphotos/image-001.jpg
http://phoneboothPhotos.s3-website-eu-west-1.amazonaws.com/public_html/phoneboothPhotos/image-001.jpg
http://phoneboothPhotos.s3-website.eu-west-1.amazonaws.com/phoneboothPhotos/image-001.jpg</p>
<p>A. First, eliminate option D; the domain is incorrect, adding a separator between s3-website and the region. Then, eliminate option C, as it adds a public_html to the portion of the URL after the domain, which is also incorrect. This leaves A and B. Here, you need to realize that the portion of a URL after the domain is case sensitive and compare the two directories to the question. A is correct, using the correct capitalization of phoneboothPhotos.</p>
<p>You have your own custom domain and want to host a static website on that domain. You also want to minimize compute costs. Which of the following AWS services would you use to host your website on your custom domain? (Choose two.) S3 EC2 Lambda Route 53</p>
<p>A, D. To minimize compute resources, you should avoid EC2 and Lambda. Enabling static website hosting on an S3 bucket is a better option. To use a custom domain, you’d need to also use Route 53 to direct traffic from your custom domain to the S3 bucket.</p>
<p>Which of the following does Elastic Beanstalk provide? (Choose two.) Deployment of code Security Capacity provisioning Cost optimization</p>
<p>A, C. Elastic Beanstalk is focused on code deployment (A). It provides that and, in the process, load balancing, Auto Scaling, health monitoring, and capacity provisioning (C).</p>
<p>Which of the following does Elastic Beanstalk not provide? (Choose two.) Deployment of code Security hardening Application health monitoring Log inspection and backup</p>
<p>B, D. Elastic Beanstalk is focused on code deployment (A). It provides that, and in the process, load balancing, Auto Scaling, health monitoring (C), and capacity provisioning. It does not provide security or log inspection.</p>
<p>Which of the following does Elastic Beanstalk support? (Choose two.) Docker C++ Scala Node.js</p>
<p>A, D. This is a little far off the beaten AWS path, but you should know which languages and technologies are commonly used and cited by AWS and which are not. In general, Docker and containers are always supported; and Node.js, JavaScript, Java, PHP, and Perl are commonly supported. C++ and Scala are not in that list.</p>
<p>Which AWS service allows you to run code without provisioning any of the underlying resources required by that code? EC2 ECS DynamoDB Lambda</p>
<p>D. EC2 and ECS are compute services but require knowledge and working with the required resources. DynamoDB is a database and cannot run code. Lambda is correct: It runs code without needing an underlying set of compute resources that are user managed.</p>
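The "no provisioning" idea is visible in the shape of a Lambda function itself: you supply only a handler, and AWS invokes it with the triggering event. A minimal Python handler sketch (the event field used here is an illustrative assumption):

```python
def handler(event, context):
    """Minimal Lambda handler shape: AWS calls this with the triggering
    event and a runtime context; no servers are provisioned by the user."""
    name = event.get("name", "world")  # hypothetical event field
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```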
<p>Which of the following AWS services allow you to run code without worrying about provisioning specific resources for that code? (Choose two.) Elastic Beanstalk ECS DynamoDB Lambda</p>
<p>A, D. Elastic Beanstalk and Lambda are very different services, but in this context, both are valid answers. Elastic Beanstalk is a sort of “code deployment wizard,” and Lambda allows for serverless code deployment. Both handle provisioning of the environment without user intervention.</p>
<p>Which of the following languages work on Lambda? (Choose two.) JavaScript Node.js Scala C++</p>
<p>A, B. You should know which languages and technologies are commonly used and cited by AWS and which are not. In general, Node.js, JavaScript, Java, PHP, and Perl are pretty commonly supported. C++ and Scala are not in that list.</p>
<p>What AWS service is ideal for gathering business intelligence from multiple data sources? Lightsail QuickSight CloudTrail RDS</p>
<p>B. QuickSight is a cloud-powered business analytics service. It provides visualizations and analysis from multiple data sources.</p>
<p>Which service would you use to create a single-sign on system for a user base that already has credentials they want to use outside of AWS? Cognito Kinesis SWF IAM</p>
<p>A. Amazon Cognito allows you to add user sign-up, sign-in, and access control to web applications, as well as single sign-on. It also allows identity providers such as Facebook and Google to be used.</p>
<p>What type of services are associated with S3 lifecycle management? Storage services Database services Compute services Networking services</p>
<p>A. Anything related to S3 is going to be storage-related. In this case, lifecycle management handles transitioning data from one S3 storage class to another.</p>
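A lifecycle policy is expressed as a configuration document of rules. The sketch below shows the structure that boto3's `put_bucket_lifecycle_configuration` accepts as its `LifecycleConfiguration` parameter; the rule ID and key prefix are illustrative values, not anything from the question:

```python
# Sketch of an S3 lifecycle configuration that transitions objects to the
# STANDARD_IA storage class after 30 days and to GLACIER after 90 days.
# "archive-old-files" and the "videos/" prefix are hypothetical examples.

lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-old-files",        # hypothetical rule name
            "Filter": {"Prefix": "videos/"},  # hypothetical key prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}
```

S3 evaluates rules like this against object age and moves matching objects between storage classes without any compute on your part, which is why lifecycle management is classified as a storage feature.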
<p>What type of services are associated with Amazon Lightsail? Storage services Networking services Compute services All of the above</p>
<p>D. Amazon Lightsail is a compute solution for web applications and involves compute, storage, and networking as well as database storage when needed. It launches servers and configures them with the needed services for web hosting. Note that while AWS considers Lightsail a compute service, it absolutely interfaces and controls additional resources.</p>
<p>What type of services are associated with Elastic Beanstalk? Storage services Networking services Compute services All of the above</p>
<p>C. Elastic Beanstalk is an Amazon service that spins up and manages a number of other services, compute in particular. Although you can configure other services, Beanstalk is considered primarily a code deployment tool and is therefore focused on compute services.</p>
<p>What type of services are associated with Redshift? Storage services Networking services Database services All of the above</p>
<p>C. Redshift is one of AWS’s OLAP (online analytical processing) tools and is a database service. While it does processing, it is primarily intended to receive large amounts of data and operate upon that data, as a database would (in loose terms).</p>
<p>What type of services are associated with CloudFront? Storage services Networking services Compute services Both B and C</p>
<p>B. CloudFront is AWS’s content delivery network (CDN). It’s a content caching system that is ultimately a networking component of your AWS buildout.</p>
<p>What type of services are associated with EMR? Storage services Analytic services Compute services Networking services</p>
<p>B. EMR is Elastic MapReduce and provides data processing and analysis of large datasets.</p>
<p>What type of services are associated with Cloud9? Storage services Analytic services Developer services Networking services</p>
<p>C. Cloud9 is a developer environment, intended as an IDE for AWS developers.</p>
<p>What type of services are associated with Direct Connect? Storage services Analytic services Developer services Networking services</p>
<p>D. Direct Connect is an AWS service for creating a high-speed connection between an on-premises site and AWS.</p>
<p>What type of services are associated with Workspaces? Mobile services Analytic services Developer services Desktop services</p>
<p>D. Amazon WorkSpaces allows you to provide a desktop service via the cloud. The service allows people throughout the world to take advantage of scalable desktop provisioning.</p>
<p>What type of services are associated with Kinesis? Mobile services Analytic services Developer services Desktop services</p>
<p>B. Kinesis is a data analytic service capable of handling large data streams and providing real-time insights.</p>
<p>What type of services are associated with OpsWorks? Mobile services Analytic services Media services Management services</p>
<p>D. OpsWorks is a configuration management service, which AWS classifies under “management tools” (especially in the AWS console). It provides managed Chef and Puppet environments.</p>
<p>Which of the following statements are true about availability zones? (Choose two.)
An elastic IP is always tied to one specific availability zone.
A region always contains two availability zones.
An availability zone’s name (for example, us-east-1a) may change across AWS accounts.
You can specify an availability zone in which to launch your instance when creating the instance.</p>
<p>C, D. Elastic IPs are scoped to a region rather than tied to a single availability zone; if an instance fails, its elastic IP can be remapped to an instance in another AZ, making A false. B is false because a region contains at least two availability zones, not exactly two. C is true: AWS maps AZ names independently per account to balance resource distribution, so us-east-1a in one account may not be the same physical zone as us-east-1a in another. D is correct, even though many users simply accept the defaults and don’t pick a specific AZ.</p>
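Specifying an availability zone at launch (option D) is done through the `Placement` parameter of EC2's `RunInstances` call. A hedged sketch of the parameter dictionary you would pass to boto3 as `ec2_client.run_instances(**run_params)`; the AMI ID and AZ are placeholder values:

```python
# Parameters for EC2 RunInstances that pin the new instance to a specific
# availability zone via Placement. The AMI ID and the AZ shown are
# placeholders; omitting Placement lets AWS choose an AZ for you.

run_params = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI identifier
    "InstanceType": "t3.micro",
    "MinCount": 1,
    "MaxCount": 1,
    "Placement": {"AvailabilityZone": "us-east-2b"},  # the AZ is chosen here
}
```

Leaving `Placement` out entirely is the "accept the defaults" path mentioned above, in which AWS selects the AZ on your behalf.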
<p>Which of the following are actual region identifiers within AWS? (Choose two.) us-east-2 jp-south-2 ap-south-1 uk-west-1</p>
<p>A, C. This is admittedly a tough question, but worth working through. You need at least a familiarity with AWS regions and know the major prefixes: US, EU, and AP (plus a few others, such as CA and SA). Knowing those, you can spot that A and C are likely valid. JP (presumably for Japan) isn’t a real prefix; Japanese regions fall under AP, as in ap-northeast-1 (Tokyo). Likewise, UK regions fall under EU, as in eu-west-2 (London), so there is no UK-specific prefix.</p>
<p>Which of the following is a valid availability zone identifier within AWS? us-east-2b eu-west-1 us-west-az-1 az-sa-east-1a</p>
<p>A. An availability zone identifier is the region identifier with a letter appended on the end. A region identifier is the region name, which is usually the country or area (eu, us, etc.), then the geographical area (southeast, west, east, etc.), then a number.</p>
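The naming convention above can be checked mechanically: an AZ identifier is a region identifier (area, geographical direction, number) followed by a single trailing letter. A small sketch of that decomposition; the function name and pattern are my own, not an AWS API:

```python
import re

# An availability zone ID is a region ID (e.g. "us-east-2") plus one
# trailing letter, so "us-east-2b" splits into region "us-east-2", zone "b".
AZ_PATTERN = re.compile(r"^([a-z]{2}-[a-z]+-\d+)([a-z])$")

def split_az(az_id):
    """Return (region, zone_letter) for a valid AZ ID, or None otherwise."""
    match = AZ_PATTERN.match(az_id)
    return match.groups() if match else None
```

Applied to the answer choices: `split_az("us-east-2b")` yields `("us-east-2", "b")`, while `"eu-west-1"` (a bare region) and `"us-west-az-1"` fail the pattern and return `None`.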
<p>Which AWS service functions like a NAS in the cloud? EBS Tape gateway EFS DynamoDB</p>
<p>C. EFS, Elastic File System, provides scalable file storage accessible from multiple compute instances at once, just like a NAS (network attached storage). EBS, Elastic Block Store, is attached to one instance at a time and is therefore not NAS-like. DynamoDB is a NoSQL database, and a tape gateway is a Storage Gateway configuration that presents a virtual tape library on premises and archives its tapes to S3 and Glacier; it is not a cloud file share.</p>