Part 1 - Resilient architecture Flashcards
A small business specializing in video processing wants to prototype cloud storage in order to lower its costs. However, management is wary of storing its client files in the cloud rather than on premises. They are focused on cost savings and experimenting with the cloud at this time. What is the best solution for their prototype? A) Install a VPN, set up an S3 bucket for their files created within the last month, and set up an additional S3-IA bucket for older files. Create a lifecycle policy in S3 to move files older than 30 days into the S3-IA bucket nightly. B) Install an AWS storage gateway using stored volumes. C) Set up a Direct Connect and back all local hard drives up to S3 over the Direct Connect nightly. D) Install an AWS storage gateway using cached volumes.
B. Anytime the primary consideration is storage with a local data presence, where data must be stored (or be seen to be stored) locally, a storage gateway is the best option. This narrows the choices to B and D. B (stored volumes) keeps the files on premises and asynchronously backs them up to S3, while D (cached volumes) stores the files in S3 and keeps only frequently accessed data cached locally. Since management is wary of keeping its primary files in the cloud, B is the best choice: the local files remain the primary source of data, while the company can still experiment with cloud storage without “risking” its data being stored primarily in the cloud.
For which of the following HTTP methods does S3 have eventual consistency? (Choose two.) PUTs of new objects UPDATEs DELETEs PUTs that overwrite existing objects
C, D. PUTs of new objects have read-after-write consistency. DELETEs and overwrite PUTs are eventually consistent across S3.
What is the smallest file size that can be stored on standard class S3? 1 byte 1 MB 0 bytes 1 KB
C. First, note that “on standard class S3” is a red herring, and irrelevant to the question. Second, objects on S3 can be 0 bytes. This is equivalent to using touch on a file and then uploading that 0-byte file to S3.
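As a quick illustration, a minimal boto3 sketch of uploading a 0-byte object (the bucket and key names here are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Upload a zero-byte object, the S3 equivalent of running touch on a file.
# "my-example-bucket" is a placeholder and must be an existing bucket you own.
s3.put_object(Bucket="my-example-bucket", Key="empty-file.txt", Body=b"")
```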
You’ve just created a new S3 bucket named ytmProfilePictures in the US East 2 region and created a folder at the root level of the bucket called images/. You’ve turned on website hosting and asked your content team to upload images into the images/ folder. At what URL will these images be available through a web browser? https://s3-us-east-2.amazonaws.com/ytmProfilePictures/images https://s3-website-us-east-2.amazonaws.com/ytmProfilePictures/images https://ytmProfilePictures.s3-website-us-east-2.amazonaws.com/images https://ytmProfilePictures.s3-website.us-east-2.amazonaws.com/images
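For context, a minimal boto3 sketch of turning on static website hosting for a bucket (the bucket name and document keys are placeholders; the objects would also need public read access before a browser could load them):

```python
import boto3

s3 = boto3.client("s3")

# Enable static website hosting on an existing bucket (placeholder name;
# real bucket names must be lowercase and globally unique).
s3.put_bucket_website(
    Bucket="ytm-profile-pictures",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```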
Which of the following statements is not true? Standard S3, S3-IA, and S3 One Zone-IA all are equally durable. The availability of S3-IA and S3 One Zone-IA is identical. Standard S3, S3-IA, and S3 One Zone-IA all have different availabilities. S3 One Zone-IA is as durable as standard S3.
B. This is an important distinction when understanding S3 classes. Standard S3, S3-IA, and S3 One Zone-IA are all equally durable, although in One Zone-IA, data will be lost if the availability zone is destroyed. Each class has a different availability, though: standard S3 is 99.99%, S3-IA is 99.9%, and S3 One Zone-IA is 99.5%. Therefore, it is false that S3-IA and S3 One Zone-IA have identical availability, making B the statement that is not true.
Which of the following AWS services appear in the AWS console across all regions? (Choose two.) S3 EC2 IAM RDS
A, C. The wording of this question is critical. Although each S3 bucket is created in a specific region, the AWS console shows all of your buckets, in every region, at all times, and bucket names are globally unique. IAM permissions are likewise global and affect all regions. RDS and EC2 instances are region specific and appear in the console only in the regions in which they were created.
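A small boto3 sketch showing that bucket listings are not scoped to a region (no assumptions beyond default credentials):

```python
import boto3

s3 = boto3.client("s3")

# list_buckets returns every bucket in the account, regardless of the
# region in which each bucket was created.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```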
You have an S3 bucket and are working on cost estimates for your customer. She has asked you about pricing of objects stored in S3. There are currently objects in the buckets ranging from 0 bytes to over 1 GB. In this situation, what is the smallest file size that S3-IA will charge you for? 1 byte 1 MB 0 bytes 128 KB
D. This is a bit of a trick question if you’re not careful. While S3 allows for 0-byte objects, and charges as such, S3-IA charges all objects as if they are at least 128 KB in size. So while you can store a smaller object in S3-IA, it will be considered 128 KB for pricing and charging purposes.
Which of the following would you use for setting up AMIs from which new instances are created in an Auto Scaling policy? The Auto Scaling policy itself The security group for the Auto Scaling policy The Auto Scaling group used by the Auto Scaling policy The launch configuration used by the Auto Scaling policy
D. Launch configurations are where details are specified for creating (launching) new instances (option D). Security groups have to do more with what traffic is allowed into and out of the launched instances. The remaining two options—A and C—don’t make sense in this context.
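To make the relationship concrete, a hedged boto3 sketch: the AMI is set on the launch configuration, and the Auto Scaling group references that configuration (all names and IDs are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# The AMI for new instances is specified in the launch configuration.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-servers-v1",
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
)

# The Auto Scaling group then points at that launch configuration.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-servers",
    LaunchConfigurationName="web-servers-v1",
    MinSize=1,
    MaxSize=4,
    AvailabilityZones=["us-east-2a", "us-east-2b"],
)
```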
You terminate an EC2 instance and find that the EBS root volume that was attached to the instance was also deleted. How can you correct this? You can’t. A root volume is always deleted when the EC2 instance attached to that volume is deleted. Take a snapshot of the EBS volume while the EC2 instance is running. Then, when the EC2 instance is terminated, you can restore the EBS volume from the snapshot. Remove termination protection from the EC2 instance. Use the AWS CLI to change the DeleteOnTermination attribute for the EBS volume to “false.”
D. By default, EBS root volumes are deleted when the associated instance is terminated. However, this is only the default behavior; therefore A is not correct. Option B does not directly address the question; the original EBS volume would still be deleted even if you take a snapshot. Option C is not relevant, but option D is: you can use the AWS CLI (or the console) to set the root volume’s DeleteOnTermination attribute to false so the volume persists after the instance is terminated.
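A minimal boto3 sketch of option D, assuming a hypothetical instance ID and that the root device is /dev/xvda (check the actual root device name before running this):

```python
import boto3

ec2 = boto3.client("ec2")

# Keep the root EBS volume when the instance is terminated by setting
# DeleteOnTermination to False on the root device mapping.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda", "Ebs": {"DeleteOnTermination": False}}
    ],
)
```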
Can you attach an EBS volume to more than one EC2 instance at the same time? Yes, as long as the volume is not the root volume. No, EBS volumes cannot be attached to more than one instance at the same time. Yes, as long as the volume is one of the SSD classes and not magnetic storage. Yes, as long as at least one of the instances uses the volume as its root volume.
B. An EBS volume can be attached to only one instance at a time. The other options are simply distractors.
How does AWS allow you to add metadata to your EC2 instances? Certificates Tags Policies Labels
B. All instances and most services in AWS support tagging for metadata. Certificates are related to SSL and help establish the identity of a site or transmission, policies are related to permissions and roles, and labels are not (currently) an AWS construct.
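For reference, a short boto3 sketch of adding tag metadata to an instance (the instance ID and tag values are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Tags are simple key/value metadata attached to the resource.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance ID
    Tags=[
        {"Key": "Environment", "Value": "staging"},
        {"Key": "Team", "Value": "video-processing"},
    ],
)
```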
Which of the following can be deployed across availability zones? Cluster placement groups Placement groups Spread placement groups Cross-region placement groups
C. Spread placement groups—which are relatively new to AWS—can be placed across multiple availability zones. Cluster placement groups cannot, and placement groups generally refers to cluster placement groups. Cross-region placement groups is a made-up term.
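A minimal boto3 sketch of creating a spread placement group (the group name is a placeholder); instances launched into it can then be spread across distinct hardware and multiple AZs:

```python
import boto3

ec2 = boto3.client("ec2")

# Strategy="spread" creates a spread placement group; "cluster" and
# "partition" are the other valid strategies.
ec2.create_placement_group(GroupName="critical-spread", Strategy="spread")
```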
You are tasked with recommending a storage solution for a large company with a capital investment in an NFS-based backup system. The company wants to investigate cloud-based storage but doesn’t want to lose its software investment either. Which type of storage gateway would you recommend? File gateway Cached volume gateway Stored volume gateway Tape gateway
A. Each of the options is a valid configuration for a storage gateway. Of the options, file gateway provides an NFS-style protocol for transferring data to and from the gateway and therefore is the best option.
You are tasked with prototyping a cloud-based storage solution for a small business. The business’s chief concern is low network latency, as its systems need near-instant access to all of its datasets. Which storage gateway would you recommend? File gateway Cached volume gateway Stored volume gateway Tape gateway
C. A stored volume gateway stores data at the on-premises data store and backs up to S3 asynchronously to support disaster recovery. Most important, though, is that by storing data locally, network latency is minimal. Of the available options, only a stored volume gateway provides local data with this speed of access across an entire dataset.
You are the solutions architect for a mapping division that has inherited a massive geospatial dataset from a recent acquisition. The data is all on local disk drives, and you want to transition the data to AWS. With datasets of over 10 TB, what is the best approach to getting this data into AWS? S3 with Transfer Acceleration Cached volume gateway Snowball Shipping the drives to AWS
C. Anytime very large data needs to be moved into AWS, consider Snowball. Snowball is a physical device that allows for data to be physically sent to AWS rather than transferred over a network. It is the only solution that will not potentially cause disruptive network outages or slowdowns.
Which of the following are not reasons to use a cached volumes storage gateway? (Choose two.) You want low-latency access to your entire dataset. You want to reduce the cost of on-site storage. You want to support iSCSI storage volumes. You want low-latency access to your most commonly accessed data.
A, C. A cached volume gateway stores the most commonly accessed data locally (option D) while keeping the entire dataset in S3, which reduces the cost of on-site storage because less local storage is needed (option B). Since both of these are valid reasons to use a cached volume gateway, the remaining two options, A and C, are the ones that are not.
Which of the following storage gateway options is best for traditional backup applications? File gateway Cached volume gateway Stored volume gateway Tape gateway
A. Be careful here. While it might seem at a glance that a tape gateway is best, most backup solutions do not employ tape backups. They use NFS mounts and file-based backups, which is exactly what a file gateway is best used for.
Which of the following storage gateway options is best for applications where low-latency access to your entire dataset is the priority? File gateway Cached volume gateway Stored volume gateway Tape gateway
C. If low-latency access to the entire dataset is needed, a stored volume gateway is a better choice than a cached volume gateway. The stored volume gateway keeps the entire dataset on premises and therefore provides fast access to all of the data.
Which of the following storage gateway options is best for reducing the costs associated with an off-site disaster recovery solution? File gateway Cached volume gateway Stored volume gateway Tape gateway
D. A tape gateway is ideal for replacing an off-site tape archive. The gateway presents a virtual tape library (VTL), avoiding the cost of transporting physical tapes to an expensive off-site storage facility.
For which of the following storage classes do you need to specify an availability zone? S3 S3-IA S3 One Zone-IA None of the above
D. While S3 does use availability zones to store objects in buckets, you do not choose the availability zone yourself. Even S3 One Zone-IA does not allow you to specify the AZ for use.
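This is visible in the API as well: when creating a bucket you supply a region, never an AZ. A hedged boto3 sketch (the bucket name is a placeholder and must be globally unique):

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-2")

# Only a region is specified; S3 decides which availability zones hold
# the data (for One Zone-IA, AWS picks the single AZ).
s3.create_bucket(
    Bucket="my-example-bucket-20240101",
    CreateBucketConfiguration={"LocationConstraint": "us-east-2"},
)
```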
What happens when traffic to your application, and therefore to S3, dramatically increases? (Choose two.) S3 will scale to handle the load if you have Auto Scaling set up. S3 will scale automatically to ensure your service is not interrupted. Scaling spreads evenly across the AWS network to minimize the effect of a spike. A few instances are scaled up dramatically to minimize the effect of the spike.
B, C. S3 is built to automatically scale in times of heavy application usage. There is no requirement to enable Auto Scaling (A); rather, this happens automatically (so B is correct). Further, S3 tends to scale evenly across the AWS network (C). Option D is the opposite of what AWS intends.
You have been tasked with helping a company migrate its expensive off-premises storage to AWS. It will still primarily back up files from its on-premises location to a local NAS. These files then need to be stored off-site (in AWS rather than the original off-site location). The company is concerned with durability and cost and wants to retain quick access to its files. What should you recommend? Copying files from the NAS to an S3 standard class bucket Copying files from the NAS to an S3 One Zone-IA class bucket Copying the files from the NAS to EBS volumes with provisioned IOPS Copying the files from the NAS to Amazon Glacier
B. When evaluating S3 storage, all storage classes have the same durability. For cost, though, S3 One Zone-IA is the clear winner. Only Glacier is potentially less expensive but does not provide the same quick file access that S3 One Zone-IA does.
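A minimal boto3 sketch of option B, copying a backup file into S3 One Zone-IA (the file path, bucket, and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# StorageClass="ONEZONE_IA" stores the object in S3 One Zone-IA.
s3.upload_file(
    Filename="/mnt/nas/backups/2024-01-01.tar.gz",
    Bucket="my-offsite-backups",
    Key="backups/2024-01-01.tar.gz",
    ExtraArgs={"StorageClass": "ONEZONE_IA"},
)
```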
How many S3 buckets can you create per AWS account, by default? 25 50 100 There is not a default limit.
C. By default, each AWS account can create up to 100 S3 buckets. However, this limit can easily be raised by AWS if you request a limit increase.
How are objects uploaded to S3 by default? In parts In a single operation You must configure this option for each S3 bucket explicitly. Via the REST API
B. By default, S3 uploads are performed in a single operation, typically a single PUT request. AWS suggests that you can upload objects up to about 100 MB this way before switching to Multipart Upload.
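A hedged boto3 sketch of that guidance, using a transfer configuration that switches to Multipart Upload only above 100 MB (file and bucket names are placeholders):

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Objects at or above 100 MB are uploaded in parts; smaller objects go up
# as a single PUT.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024)
s3.upload_file(
    Filename="video-master.mov",
    Bucket="my-example-bucket",
    Key="video-master.mov",
    Config=config,
)
```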
Which of the following are ways you should consider using Multipart Upload? (Choose two.) For uploading large objects over a stable high-bandwidth network to maximize bandwidth For uploading large objects to reduce the cost of ingress related to those objects For uploading any size files over a spotty network to increase resiliency For uploading files that must be appended to existing files
A, C. Multipart Upload is ideal for large objects on stable, high-bandwidth networks, where uploading parts in parallel maximizes throughput (A). It also increases resiliency on spotty networks, because an individual part can fail and be retried without forcing the entire upload to start over (C). There is no cost for data ingress into S3 (B), and S3 does not support appending to existing objects, so D does not apply.
How is a presigned URL different from a normal URL? (Choose two.) A presigned URL has permissions associated with certain objects provided by the creator of the URL. A presigned URL has permissions associated with certain objects provided by the user of the URL. A presigned URL allows access to private S3 buckets without requiring AWS credentials. A presigned URL includes encrypted credentials as part of the URL.
A, C. Presigned URLs are created to allow users without AWS credentials to access specific resources (option C). And it’s the creator of the URL (option A) that assigns these permissions, rather than the user (option B). Finally, these credentials are associated with the URL but are not encrypted into the URL itself.
How long is a presigned URL valid? 60 seconds 60 minutes 24 hours As long as it is configured to last
D. A presigned URL is always configured at creation for a valid Time to Live (often referred to as TTL). This time can be very short, or quite long.
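Tying the last two answers together, a minimal boto3 sketch of generating a presigned URL with a one-hour TTL (bucket and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# The creator of the URL decides which object it grants access to and how
# long it remains valid; the holder needs no AWS credentials.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "reports/q1.pdf"},
    ExpiresIn=3600,  # TTL in seconds
)
print(url)
```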
Which of the following behaviors is consistent with how S3 handles object operations on a bucket? A process writes a new object to Amazon S3 and immediately lists keys within its bucket. The new object does not appear in the list of keys. A process deletes an object, attempts to immediately read the deleted object, and S3 still returns the deleted data. A process deletes an object and immediately lists the keys in the bucket. S3 returns a list with the deleted object in the list. All of the above
D. These are all consistent with S3 behavior. Option A could occur while the new object is still being propagated across S3’s storage infrastructure. B and C could occur as a result of eventual consistency, where a DELETE operation is not immediately reflected in reads or key listings.
Which of the following storage media are object based? (Choose two.) S3-IA EBS EFS S3 standard
A, D. All S3 storage classes are object based. EBS is block based and EFS is file based, so neither is object storage.
How many PUTs per second does S3 support? 100 1500 3500 5000
C. This is important because it reflects a recent change by AWS. Until 2018, there was a hard limit of 100 PUTs per second on S3, but that limit has since been raised to 3,500 PUTs per second per prefix.
What unique domain name do S3 buckets created in US East (N. Virginia) have, as compared to other regions? s3.amazonaws.com s3-us-east-1.amazonaws.com s3-us-east.amazonaws.com s3-amazonaws.com
A. S3 buckets have names based upon the S3 identifier (s3), the region (us-east-1 in this case), and the amazonaws.com domain. Then, the bucket name appears after the domain. That results in a URL like https://s3-us-east-1.amazonaws.com/prototypeBucket32. However, buckets in US East are a special case and should use the special, unique endpoint s3.amazonaws.com (option A).