CloudAcademy: Knowledge Check: Storage (SAA-C03) 2 of 2 Flashcards
Which types of AWS Storage Gateway offer an unlimited amount of total storage per gateway? (Choose 2 answers)
A. Tape gateway
B. File gateway
C. Stored volume gateway
D. Cached volume gateway
A. Tape gateway
B. File gateway
Explanation:
File gateways and tape gateways are directly connected to Amazon S3 and, by extension, Amazon Glacier. As such, each gateway has unlimited total storage capacity, although other factors limit the workload each gateway can support.
Volume gateways, as the name implies, present block storage volumes, similar to Amazon EBS. These volumes offer a limited amount of storage, even though they are backed by Amazon S3. An individual virtual tape for the tape gateway has a maximum size of 5 TB, but the number of tapes, and therefore the gateway's total capacity, is unlimited.
How are costs calculated for Amazon FSx for Windows File Server?
A. Based on storage capacity
B. Based on storage capacity and storage class
C. Based on storage capacity, throughput, and backup
D. Based on storage capacity and throughput
C. Based on storage capacity, throughput, and backup
Explanation:
FSx for Windows has three pricing dimensions: capacity, throughput, and backups.
Much like EFS, there are no setup fees for the use of this service; however, you do pay for the amount of storage capacity that you consume. This is priced on the average storage provisioned per month, uses the metric of gigabyte-months, and offers varied pricing between single- and multi-AZ deployments.
In addition to the actual storage that you use, there is also a cost for the amount of throughput that you configure for your file systems; this metric is based upon MBps-months. Again, cost variations exist between single- and multi-AZ deployments. One point to bear in mind is that any data transfer costs when using multi-AZ are included in the pricing you see for the multi-AZ deployment.
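As a rough illustration of how the three dimensions combine, here is a minimal cost sketch in Python; the unit prices are hypothetical placeholders, not current AWS pricing:

```python
# Rough FSx for Windows monthly cost model (single-AZ).
# All unit prices below are illustrative, not current AWS rates.
STORAGE_PER_GB_MONTH = 0.13       # SSD storage, $/GB-month (hypothetical)
THROUGHPUT_PER_MBPS_MONTH = 2.20  # $/MBps-month (hypothetical)
BACKUP_PER_GB_MONTH = 0.05        # $/GB-month (hypothetical)

def fsx_monthly_cost(storage_gb: float, throughput_mbps: float, backup_gb: float) -> float:
    """Sum the three FSx for Windows pricing dimensions for one month."""
    return (storage_gb * STORAGE_PER_GB_MONTH
            + throughput_mbps * THROUGHPUT_PER_MBPS_MONTH
            + backup_gb * BACKUP_PER_GB_MONTH)

# Example: 1 TB of provisioned storage, 32 MBps of throughput, 500 GB of backups.
print(f"${fsx_monthly_cost(1024, 32, 500):,.2f}")  # -> $228.52
```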
Which of the following statements regarding data deduplication in Amazon FSx is correct?
A. Data deduplication will be obvious to your connected users/clients.
B. Data deduplication significantly affects the performance of the file system.
C. Data deduplication occurs automatically.
D. Data deduplication runs as a frontend, manual process.
C. Data deduplication occurs automatically.
Explanation:
Deduplication runs as a background process that does not significantly affect the performance of the file system. It is also a transparent part of the file system and will not be obvious to your connected users or clients. Data deduplication is automatic and continues to scan your file systems in the background, looking for any extra copies of data.
Which of the following options is not one of the three elements that must be configured when performing data transfer from on-premises using AWS DataSync?
A. service
B. location
C. agent
D. task
A. service
Explanation:
When performing data transfer from on-premises, you need to configure three elements: an agent, a location, and a task.
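For illustration, a minimal boto3 sketch of wiring these three elements together; the activation key, hostnames, and ARNs are all placeholders:

```python
import boto3

datasync = boto3.client("datasync")

# 1. Register the on-premises agent (the agent VM is deployed beforehand).
agent = datasync.create_agent(ActivationKey="ABCDE-12345-FGHIJ-67890-KLMNO")

# 2. Define the source (on-premises NFS) and destination (S3) locations.
source = datasync.create_location_nfs(
    ServerHostname="nfs.example.internal",
    Subdirectory="/exports/data",
    OnPremConfig={"AgentArns": [agent["AgentArn"]]},
)
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-destination-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/DataSyncS3Role"},
)

# 3. Create the task that ties source to destination, then run it.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="onprem-to-s3",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```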
Which EBS volume type is ideal for applications requiring I/O intensive workloads?
A. Cold HDD (SC1)
B. General Purpose SSD volume (GP2)
C. Throughput Optimized HDD (ST1)
D. Provisioned IOPS SSD volume (IO1)
D. Provisioned IOPS SSD volume (IO1)
Explanation:
Provisioned IOPS SSD volumes deliver enhanced, predictable performance for applications requiring I/O-intensive workloads. You specify the IOPS rate when creating a new EBS volume, and when the volume is attached to an EBS-optimized instance, EBS will deliver the defined IOPS within 10%, 99.9% of the time throughout the year. The volumes range from 4 GiB to 16 TiB in size, and each volume supports up to 64,000 provisioned IOPS on instances built on the Nitro System.
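For illustration, a minimal boto3 sketch of creating such a volume; the availability zone, size, and IOPS values are arbitrary examples:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a Provisioned IOPS (io1) volume, specifying the IOPS rate
# at creation time. Values here are illustrative.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB; io1 supports 4 GiB - 16 TiB
    VolumeType="io1",
    Iops=10000,          # the provisioned IOPS rate for this volume
)
print(volume["VolumeId"])
```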
In which of the following scenarios will data be lost from an EC2 instance store? (Choose 2 answers)
A. Disk drive failure
B. The instance stops
C. The instance reboots
D. Network failure
A. Disk drive failure
B. The instance stops
Explanation:
An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
Data in the instance store is lost under the following circumstances:
- The underlying disk drive fails
- The instance stops
- The instance terminates
If the instance reboots (either intentionally or unintentionally), the data persists.
The AWS Snow family consists of a range of _____ that are all designed to enable you to transfer data into AWS from the edge or beyond the cloud.
A. managed relational databases
B. virtual servers
C. cloud compute instances
D. physical hardware devices
D. physical hardware devices
Explanation:
The Snow family consists of a range of physical hardware devices designed to enable you to transfer data into AWS from the edge or beyond the cloud, such as your data center. They can also be used to transfer data out of AWS, for example, from Amazon S3 back to your data center.
Which of the following AWS solutions, through integration with AWS Organizations, allows a company to set policies for when EBS volumes and EBS-backed Amazon Machine Images (AMIs) are created, retained, and deleted?
A. Amazon Data Lifecycle Manager
B. EBS Recycle Bin
C. AWS Backup
D. AWS Config
A. Amazon Data Lifecycle Manager
Explanation:
Data Lifecycle Manager enables you to create policies for EBS volume and EBS-backed AMI creation, retention, and deletion.
EBS Recycle Bin allows you to restore snapshots that have been deleted accidentally.
AWS Backup allows a company to create and enforce policies around backing up EBS volumes, which deals with creating the snapshots, not managing or undoing their deletion.
AWS Config records resource configurations, evaluates them against approved configurations, and tracks what changes, if any, are made to existing resources.
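For illustration, a minimal boto3 sketch of a Data Lifecycle Manager policy that snapshots tagged volumes daily and retains the seven most recent copies; the role ARN and tag values are placeholders:

```python
import boto3

dlm = boto3.client("dlm")

# Snapshot every volume tagged Backup=daily once every 24 hours
# and retain the 7 most recent snapshots.
dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily snapshots, 7-day retention",
    State="ENABLED",
    PolicyDetails={
        "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "daily"}],
        "Schedules": [{
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
        }],
    },
)
```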
Which EBS volume type is ideal for large workloads that are accessed infrequently?
A. Throughput Optimized HDD (ST1)
B. Cold HDD (SC1)
C. Provisioned IOPS SSD volume (IO1)
D. General Purpose SSD volume (GP2)
B. Cold HDD (SC1)
Explanation:
The cold HDD volumes offer the lowest cost of all the EBS volume types. They are suited to workloads that are large in size and accessed infrequently. Their key performance attribute is throughput, measured in megabytes per second. They can burst up to 80 megabytes per second per terabyte, with the maximum burst capacity for each volume capped at 250 megabytes per second, and they deliver the expected throughput 99% of the time over a given year. Due to the nature of these volumes, it’s not possible to use them as boot volumes for your EC2 instances.
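The burst arithmetic can be sketched in a couple of lines:

```python
# SC1 burst throughput scales with volume size: 80 MB/s per TB,
# capped at 250 MB/s per volume.
def sc1_burst_mbps(volume_size_tb: float) -> float:
    return min(volume_size_tb * 80, 250)

print(sc1_burst_mbps(1))  # 80  MB/s
print(sc1_burst_mbps(4))  # 250 MB/s (the cap is reached at ~3.1 TB)
```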
An Amazon EC2 instance store provides temporary block-level storage for your instance. Ephemeral storage is ideal for ________________.
A. high-performance storage of user files
B. persistent data
C. storing critical system files
D. non-persistent data
D. non-persistent data
Explanation:
An Amazon EC2 instance store provides temporary block-level storage for your instance. An instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content. Ephemeral storage is ideal for non-persistent data.
In testing your new EFS file system, your application’s baseline workload quickly exhausts the standard EFS mebibyte per second limits.
What configuration adjustments would allow EFS to provide unlimited mebibytes per second for as long as your application requires?
A. Change from Bursting Throughput mode to Provisioned Throughput.
B. Change from General Purpose performance mode to Max I/O.
C. Change from Max I/O performance mode to General Purpose.
D. Change from Provisioned Throughput mode to Bursting Throughput.
A. Change from Bursting Throughput mode to Provisioned Throughput.
Explanation:
EFS also provides two different throughput modes, and throughput is measured by the rate of mebibytes. The two modes offered are Bursting Throughput and Provisioned Throughput.
With the Bursting Throughput mode, which is the default mode, the amount of throughput scales as your file system grows: the more you store, the more throughput is available to you. All file systems can burst to 100 mebibytes per second, and with the Standard storage class, file systems can burst to 100 mebibytes per second per tebibyte of storage used within the file system.
If you are finding that you’re running out of burst credits too often, you might need to consider using the Provisioned Throughput mode. Provisioned Throughput allows you to provision throughput above the allowance that would otherwise be based on your file system size. So if your file system is relatively small but its use case requires a high throughput rate, the default Bursting Throughput mode may not be able to process your requests quickly enough.
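For illustration, a minimal boto3 sketch of switching an existing file system to Provisioned Throughput; the file system ID and throughput value are placeholders:

```python
import boto3

efs = boto3.client("efs")

# Switch from the default Bursting Throughput mode to Provisioned
# Throughput at 256 MiB/s.
efs.update_file_system(
    FileSystemId="fs-0123456789abcdef0",
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=256,
)
```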
Which of the following statements about Amazon Elastic File System (EFS) is false?
A. It uses standard operating system APIs, so any application that is designed to work with standard operating system APIs will work with EFS.
B. It uses strong consistency.
C. It uses file locking.
D. Any EFS application deployments that span across multiple availability zones cannot access the same file systems.
D. Any EFS application deployments that span across multiple availability zones cannot access the same file systems.
Explanation:
It uses standard operating system APIs, so any application that is designed to work with standard operating system APIs will work with EFS. It supports NFS versions 4.1 and 4.0, and uses standard file system semantics such as strong consistency and file locking. It’s replicated across availability zones in a single region, making EFS a highly reliable storage service. The EFS file system is also regional, so any application deployments that span multiple availability zones can all access the same file systems, providing high availability for your application’s storage layer.
Which AWS Storage Gateway configuration allows you to use Amazon Glacier for archiving data?
A. Gateway Virtual Tape Library
B. File gateways
C. Media Changer
D. Cached volume gateways
A. Gateway Virtual Tape Library
Explanation:
Gateway Virtual Tape Library (VTL) allows you to back up your data to S3 from your own corporate data center and leverage Amazon Glacier for data archiving.
Media Changer is a virtual device that manages tapes to and from the tape drive to your VTL.
File gateways allow you to securely store your files as objects within S3. They act as a type of file share, allowing you to mount or map drives to an S3 bucket as if the share were held locally on your own corporate network. However, they are designed for file storage rather than tape archiving, so they are not the configuration used for Glacier archiving.
Cached volume gateways store data on Amazon S3 rather than your own local storage, and they do not use Amazon Glacier for data archiving.
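For illustration, a minimal boto3 sketch of creating a virtual tape on an existing tape gateway (VTL); the gateway ARN and barcode are placeholders:

```python
import boto3

sgw = boto3.client("storagegateway")

# Create a single 2.5 TiB virtual tape on an existing tape gateway.
sgw.create_tape_with_barcode(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    TapeSizeInBytes=2560 * 1024**3,  # 2.5 TiB, expressed in bytes
    TapeBarcode="TAPE0001",
)
```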
You have several EC2 instances deployed in multiple availability zones within a VPC. Some are placed in public subnets while others are placed in private subnets. You are deploying Amazon EFS with a standard storage class.
With this resource configuration, how does AWS recommend you deploy Amazon EFS mount targets?
A. Deploy a single mount target for the entire VPC.
B. Deploy a mount target into each availability zone within the VPC.
C. Deploy a mount target for each EC2 instance in an availability zone.
D. Deploy one mount target for instances in public subnets and another mount target for instances in private subnets.
B. Deploy a mount target into each availability zone within the VPC.
Explanation:
It is possible to deploy a single mount target for the entire VPC, but this is not recommended due to the added cost of EFS mount targets communicating with instances in different availability zones. The recommended approach is to deploy a mount target into each availability zone. The subnet types are not an issue, and one mount target per EC2 instance defeats the purpose of the service, which is designed to let a large number of instances access a file system simultaneously.
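For illustration, a minimal boto3 sketch of the recommended one-mount-target-per-AZ layout, using one subnet from each availability zone; all IDs are placeholders:

```python
import boto3

efs = boto3.client("efs")

# One mount target per availability zone: pass one subnet from each AZ.
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"]:
    efs.create_mount_target(
        FileSystemId="fs-0123456789abcdef0",
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
```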
As a member of the data management team, you are reviewing which Amazon EFS storage classes to use for the company’s various data types.
Application backup files need to be stored but only accessed in the event of a critical failure, which rarely occurs. The files are critical to system recovery, and it is important to mitigate data loss as much as possible.
Which EFS storage class would be most effective for storing these application backup files?
A. EFS One Zone-Infrequent Access (IA)
B. EFS Standard-Infrequent Access (IA)
C. EFS One Zone
D. EFS Standard
B. EFS Standard-Infrequent Access (IA)
Explanation:
The Standard-IA storage class reduces storage costs for files that are not accessed every day. It does this without sacrificing the high availability, high durability, elasticity, and POSIX file system access that Amazon EFS provides.
We recommend Standard-IA storage if you need your full dataset to be readily accessible and want to automatically save on storage costs for files that are less frequently accessed. Examples include keeping files accessible to satisfy audit requirements, performing historical analysis, or performing backup and recovery. Standard-IA storage is compatible with all Amazon EFS features, and is available in all AWS Regions where Amazon EFS is available.
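For illustration, a minimal boto3 sketch of a lifecycle policy that automatically transitions files to Standard-IA after 30 days without access; the file system ID is a placeholder:

```python
import boto3

efs = boto3.client("efs")

# Move files that haven't been accessed for 30 days into the
# Infrequent Access storage class automatically.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)
```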