CloudAcademy: Knowledge Check: Storage (SAA-C03) 2 of 2 Flashcards

1
Q

Which types of AWS Storage Gateway offer an unlimited amount of total storage per gateway? (Choose 2 answers)

A. Tape gateway
B. File gateway
C. Stored volume gateway
D. Cached volume gateway

A

A. Tape gateway
B. File gateway

Explanation:
File gateways and tape gateways are directly connected to Amazon S3 and, by extension, Amazon Glacier. As such, each gateway has unlimited total storage capacity, although other factors limit the workload these gateways are able to support.

Volume gateways, as the name implies, store data on block storage volumes, similar to Amazon EBS. These volumes offer a limited amount of storage, even though the volume data can be stored in Amazon S3. A virtual tape on a tape gateway has a maximum size of 5 TB.

2
Q

How are costs calculated for Amazon FSx for Windows File Server?

A. Based on storage capacity
B. Based on storage capacity and storage class
C. Based on storage capacity, throughput, and backup
D. Based on storage capacity and throughput

A

C. Based on storage capacity, throughput, and backup

Explanation:
FSx for Windows File Server has three pricing components: storage capacity, throughput, and backups.

Much like EFS, there are no setup fees for the use of this service; however, you do pay for the amount of storage capacity that you consume. This is priced on the average storage provisioned per month, measured in gigabyte-months, with different pricing for single-AZ and multi-AZ deployments.

In addition to the storage that you use, there is also a cost for the amount of throughput that you configure for your file systems, measured in MBps-months. Again, cost variations exist between single-AZ and multi-AZ deployments. One point to bear in mind is that any data transfer costs when using multi-AZ are included in the pricing you see for the multi-AZ deployment.
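As a rough illustration of how these three cost components combine, the Python sketch below multiplies each usage metric by a per-unit rate. The rates shown are placeholder assumptions, not actual AWS prices, which vary by Region and deployment type.

# Illustrative only: the rates below are assumed placeholders, not AWS prices.
storage_gb_months = 1024        # average provisioned storage capacity (GB-months)
throughput_mbps_months = 32     # provisioned throughput (MBps-months)
backup_gb_months = 200          # backup storage consumed (GB-months)

rate_storage = 0.13             # $ per GB-month (assumed)
rate_throughput = 2.20          # $ per MBps-month (assumed)
rate_backup = 0.05              # $ per GB-month (assumed)

monthly_cost = (storage_gb_months * rate_storage
                + throughput_mbps_months * rate_throughput
                + backup_gb_months * rate_backup)
print(f"Estimated monthly FSx for Windows cost: ${monthly_cost:.2f}")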

3
Q

Which of the following statements regarding data deduplication in Amazon FSx is correct?

A. Data deduplication will be obvious to your connected users/clients.
B. Data deduplication significantly affects the performance of the file system.
C. Data deduplication occurs automatically.
D. Data deduplication runs as a frontend, manual process.

A

C. Data deduplication occurs automatically.

Explanation:
Data deduplication runs as a background process, which does not significantly affect the performance of the file system. It is also a transparent part of the file system and will not be obvious to your connected users or clients. Data deduplication is automatic and will continue to scan your file systems in the background, looking for any extra copies of data.

4
Q

Which of the following options is not one of the three elements that must be configured when performing data transfer from on-premises using AWS DataSync?

A. service
B. location
C. agent
D. task

A

A. service

Explanation:
When performing a data transfer from on-premises, you need to configure an agent, a location, and a task.

5
Q

Which EBS volume type is ideal for applications requiring I/O intensive workloads?

A. Cold HDD (SC1)
B. General Purpose SSD volume (GP2)
C. Throughput Optimized HDD (ST1)
D. Provisioned IOPS SSD volume (IO1)

A

D. Provisioned IOPS SSD volume (IO1)

Explanation:
Provisioned IOPS SSD volumes deliver enhanced, predictable performance for applications requiring I/O-intensive workloads. They also allow you to specify an IOPS rate during the creation of a new EBS volume, and when the volume is attached to an EBS-optimized instance, EBS will deliver the defined IOPS within 10%, 99.9% of the time throughout the year. The volumes range from 4 GiB to 16 TiB in size, and the maximum possible IOPS per volume is set to 20,000 IOPS.
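A minimal boto3 sketch of specifying the IOPS rate at volume creation time; the Availability Zone, size, and IOPS values are illustrative assumptions, not values from the course.

import boto3

ec2 = boto3.client("ec2")

# Create a Provisioned IOPS SSD (io1) volume with an explicit IOPS rate.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # assumed AZ
    Size=500,                       # size in GiB (assumed)
    VolumeType="io1",
    Iops=10000,                     # IOPS specified at creation time
)
print(volume["VolumeId"])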

6
Q

In which of the following scenarios will data be lost from an EC2 instance store? (Choose 2 answers)

A. Disk drive failure
B. The instance stops
C. The instance reboots
D. Network failure

A

A. Disk drive failure
B. The instance stops

Explanation:
An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.

Data in the instance store is lost under the following circumstances:

The underlying disk drive fails
The instance stops
The instance terminates

If the instance reboots (either intentionally or unintentionally) the data persists.

7
Q

The AWS Snow family consists of a range of _____ that are all designed to enable you to transfer data into AWS from the edge or beyond the cloud.

A. managed relational databases
B. virtual servers
C. cloud compute instances
D. physical hardware devices

A

D. physical hardware devices

Explanation:
The Snow family consists of a range of physical hardware devices that are all designed to enable you to transfer data into AWS from the edge or beyond the cloud, such as your data center. They can also be used to transfer data out of AWS, for example, from Amazon S3 back to your data center.

8
Q

Which of the following AWS solutions, through integration with AWS Organizations, allows a company to set policies for when EBS volumes and EBS-backed Amazon Machine Images (AMIs) are created, retained, and deleted?

A. Amazon Data Lifecycle Manager
B. EBS Recycle Bin
C. AWS Backup
D. AWS Config

A

A. Amazon Data Lifecycle Manager

Explanation:
Data Lifecycle Manager enables you to create policies for EBS volume and EBS-backed AMI creation, retention, and deletion.

EBS Recycle Bin allows you to restore snapshots that have been deleted accidentally.

AWS Backup allows a company to create and enforce policies around backing up EBS volumes, which deals with creating the snapshots, not managing or undoing their deletion.

AWS Config creates policies around creating resources, their approved configuration, and what changes, if any, are made to existing resources.
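As a sketch of what a Data Lifecycle Manager policy looks like in practice, the boto3 call below creates a simple tag-targeted snapshot schedule; the role ARN, tag, and retention values are illustrative assumptions.

import boto3

dlm = boto3.client("dlm")

# Snapshot tagged volumes daily and retain the last 7 snapshots.
response = dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/DLMRole",  # assumed role
    Description="Daily snapshots of tagged volumes",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "Daily"}],    # assumed tag
        "Schedules": [{
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
            "CopyTags": True,
        }],
    },
)
print(response["PolicyId"])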

9
Q

Which EBS volume type is ideal for large workloads that are accessed infrequently?

A. Throughput Optimized HDD (ST1)
B. Cold HDD (SC1)
C. Provisioned IOPS SSD volume (IO1)
D. General Purpose SSD volume (GP2)

A

B. Cold HDD (SC1)

Explanation:
Cold HDD volumes offer the lowest cost compared to all other EBS volume types. They are suited for workloads that are large in size and accessed infrequently. Their key performance attribute is their throughput capability, measured in megabytes per second. They have the ability to burst up to 80 megabytes per second per terabyte, with a maximum burst capacity for each volume set at 250 megabytes per second. They will deliver the expected throughput 99% of the time over a given year, and due to the nature of these volumes, it’s not possible to use them as boot volumes for your EC2 instances.

10
Q

An Amazon EC2 instance store provides temporary block-level storage for your instance. Ephemeral storage is ideal for ________________.

A. high-performance storage of user files
B. persistent data
C. storing critical system files
D. non-persistent data

A

D. non-persistent data

Explanation:
An Amazon EC2 Instance Store provides temporary block-level storage for your instance. An instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content. Ephemeral storage is ideal for non-persistent data.

11
Q

In testing your new EFS file system, your application’s baseline workload quickly exhausts the standard EFS mebibyte per second limits.

What configuration adjustments would allow EFS to provide unlimited mebibytes per second for as long as your application requires?

A. Change from Bursting Throughput mode to Provisioned Throughput.
B. Change from General Purpose performance mode to Max I/O.
C. Change from Max I/O performance mode to General Purpose.
D. Change from Provisioned Throughput mode to Bursting Throughput.

A

A. Change from Bursting Throughput mode to Provisioned Throughput.

Explanation:
EFS also provides two different throughput modes, with throughput measured as a rate of mebibytes per second. The two modes offered are Bursting Throughput and Provisioned Throughput.

With the Bursting Throughput mode, which is the default mode, the amount of throughput scales as your file system grows. So the more you store, the more throughput is available to you. The default throughput available is capable of bursting to 100 mebibytes per second. However, with the standard storage class, this can burst to 100 mebibytes per second per tebibyte of storage used within the file system.

If you find that you’re running out of burst credits too often, you might need to consider using the Provisioned Throughput mode. Provisioned Throughput allows you to provision throughput above the allowance that Bursting mode would give you based on your file system size. So if your file system is relatively small but its use case requires a high throughput rate, the default Bursting Throughput mode may not be able to process your requests quickly enough.
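A minimal boto3 sketch of switching an existing file system from Bursting to Provisioned Throughput; the file system ID and throughput value are illustrative assumptions.

import boto3

efs = boto3.client("efs")

# Switch an existing file system from Bursting to Provisioned Throughput.
efs.update_file_system(
    FileSystemId="fs-12345678",          # assumed file system ID
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=256,    # assumed throughput requirement
)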

12
Q

Which of the following statements about Amazon Elastic File System (EFS) is false?

A. It uses standard operating system APIs, so any application that is designed to work with standard operating system APIs will work with EFS.
B. It uses strong consistency.
C. It uses file locking.
D. Any EFS application deployments that span across multiple availability zones cannot access the same file systems.

A

D. Any EFS application deployments that span across multiple availability zones cannot access the same file systems.

Explanation:
It uses standard operating system APIs, so any application that is designed to work with standard operating system APIs will work with EFS. It supports both NFS versions 4.1 and 4.0, and uses standard file system semantics such as strong consistency and file locking. It’s replicated across availability zones in a single region, making EFS a highly reliable storage service. The EFS file system is also regional, so any application deployments that span multiple availability zones can all access the same file systems, providing a level of high availability for your application storage layer.

13
Q

Which AWS Storage Gateway configuration allows you to use Amazon Glacier for archiving data?

A. Gateway Virtual Tape Library
B. File gateways
C. Media Changer
D. Cached volume gateways

A

A. Gateway Virtual Tape Library

Explanation:
Gateway Virtual Tape Library (VTL) allows you to back up your data to S3 from your own corporate data center and leverage Amazon Glacier for data archiving.

Media Changer is a virtual device that manages tapes to and from the tape drive to your VTL.

File gateways allow you to securely store your files as objects within S3. They act as a type of file share, allowing you to mount or map drives to an S3 bucket as if the share were held locally on your own corporate network. However, they are designed for file storage rather than archiving to Amazon Glacier.

Cached volume gateways store data on Amazon S3 rather than your own local storage, and they do not use Amazon Glacier for data archiving.

14
Q

You have several EC2 instances deployed in multiple availability zones within a VPC. Some are placed in public subnets while others are placed in private subnets. You are deploying Amazon EFS with a standard storage class.

With this resource configuration, how does AWS recommend you deploy Amazon EFS mount targets?

A. Deploy a single mount target for the entire VPC.
B. Deploy a mount target into each availability zone within the VPC.
C. Deploy a mount target for each EC2 instance in an availability zone.
D. Deploy one mount target for instances in public subnets and another mount target for instances in private subnets.

A

B. Deploy a mount target into each availability zone within the VPC.

Explanation:
It is possible to deploy a single mount target for the entire VPC, but this is not recommended due to added costs for EFS mount targets communicating with instances in different availability zones. The recommended approach is to deploy a mount target into each availability zone. The subnet types will not be an issue, and one mount target per EC2 instance defeats the purpose of the service’s capability to communicate with a large number of instances simultaneously.
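A minimal boto3 sketch of the recommended layout, one mount target per Availability Zone, created in one subnet from each zone; the file system, subnet, and security group IDs are illustrative assumptions.

import boto3

efs = boto3.client("efs")

# One subnet per Availability Zone in the VPC (assumed IDs).
subnets_by_az = {
    "us-east-1a": "subnet-aaaa1111",
    "us-east-1b": "subnet-bbbb2222",
    "us-east-1c": "subnet-cccc3333",
}

for az, subnet_id in subnets_by_az.items():
    mt = efs.create_mount_target(
        FileSystemId="fs-12345678",                 # assumed file system ID
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],    # assumed NFS security group
    )
    print(az, mt["MountTargetId"])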

15
Q

As a member of the data management team, you are reviewing which Amazon EFS storage classes to use for the company’s various data types.

Application backup files need to be stored but only accessed in the event of a critical failure, which rarely occurs. The files are critical to system recovery, and it is important to mitigate data loss as much as possible.

Which EFS storage class would be most effective for storing these application backup files?

A. EFS One Zone-Infrequent Access (IA)
B. EFS Standard-Infrequent Access (IA)
C. EFS One Zone
D. EFS Standard

A

B. EFS Standard-Infrequent Access (IA)

Explanation:
The Standard–IA storage class reduces storage costs for files that are not accessed every day. It does this without sacrificing the high availability, high durability, elasticity, and POSIX file system access that Amazon EFS provides.

We recommend Standard-IA storage if you need your full dataset to be readily accessible and want to automatically save on storage costs for files that are less frequently accessed. Examples include keeping files accessible to satisfy audit requirements, performing historical analysis, or performing backup and recovery. Standard-IA storage is compatible with all Amazon EFS features, and is available in all AWS Regions where Amazon EFS is available.
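Files are moved into Standard-IA by an EFS lifecycle policy. A minimal boto3 sketch of transitioning files after 30 days without access follows; the file system ID is an illustrative assumption.

import boto3

efs = boto3.client("efs")

# Transition files to the Infrequent Access storage class after 30 days
# without access.
efs.put_lifecycle_configuration(
    FileSystemId="fs-12345678",   # assumed file system ID
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)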

16
Q

When should you use an AWS Snowcone over a Snowball?

A. when you need to transfer data using S3 APIs
B. when you require the use of usable SSD storage
C. when you need to perform online data transfer to AWS
D. when you require the Snow device to be HIPAA compliant

A

C. when you need to perform online data transfer to AWS

Explanation:
You would use the Snowcone if you needed the ability to perform online data transfer using AWS DataSync, avoiding the need to send the Snowcone back to AWS for an offline data transfer.

17
Q

You need a shared file system that integrates with Amazon S3, supports multiple connections simultaneously, and is ideal for compute-intensive workloads, such as high-performance computing. Which AWS service should you use?

A. Amazon Elastic File Service (EFS)
B. Amazon FSx for Lustre
C. Amazon Elastic Block Storage (EBS)
D. Amazon FSx for Windows File Server

A

B. Amazon FSx for Lustre

Explanation:
Amazon FSx for Lustre is a fully managed file system designed for compute-intensive workloads, for example, machine learning and high-performance computing. It has the ability to process massive data sets, with performance of up to hundreds of GB per second of throughput, millions of IOPS, and sub-millisecond latencies. It integrates with Amazon S3 and supports cloud-bursting workloads from on-premises over Direct Connect and VPN connections.
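A minimal boto3 sketch of creating a Lustre file system linked to an S3 bucket; the subnet, storage capacity, deployment type, and bucket name are illustrative assumptions.

import boto3

fsx = boto3.client("fsx")

# Create a Lustre file system that imports objects from an S3 bucket.
fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                        # GiB (assumed)
    SubnetIds=["subnet-aaaa1111"],               # assumed subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",           # assumed deployment type
        "ImportPath": "s3://my-dataset-bucket",  # assumed linked S3 bucket
    },
)
print(fs["FileSystem"]["FileSystemId"])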

18
Q

What is AWS DataSync?

A. a data transfer service from on-premises to AWS or between two different AWS storage services
B. a service that allows you to create a directional business case for AWS to gain visibility into the costs and savings associated with migration strategies
C. a service that automates, schedules, and tracks incremental replications of live server volumes
D. a Redis-compatible, durable, in-memory database that delivers ultra-fast performance

A

A. a data transfer service from on-premises to AWS or between two different AWS storage services

Explanation:
AWS DataSync is a service that allows you to easily and securely transfer data from your on-premises data center to AWS storage services. It can also be used to manage data transfer between two different AWS storage services.

19
Q

Which feature designates the frequency, storage, replication, and tagging of backups managed by AWS Backup?

A. The Backup vault
B. The Backup storage class
C. The Backup plan
D. The Backup framework

A

C. The Backup plan

Explanation:
When using AWS Backup you will need to create backup policies or backup plans. These simply determine the exact requirements that you need for your backups and contain information such as:

A backup schedule
Backup window
Lifecycle rules, such as the transition of data to cold storage after a set period
A backup vault, which is where your backups are stored and encrypted through the use of KMS encryption keys
Regional copies
Tags

Once you have created your backup plans, you can assign resources to them. This allows you to create multiple backup plans, each with different criteria, to meet the backup needs of different types of resources. Through the use of tags, you can associate multiple resources at once using tag-based backup policies; this ensures you capture all of the required resources within your plan.
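A minimal boto3 sketch of a backup plan that captures these elements (schedule, backup window, cold-storage lifecycle, vault, and tags) and then assigns resources by tag; the names, ARNs, and values are illustrative assumptions.

import boto3

backup = boto3.client("backup")

# Backup plan: daily schedule, 60-minute start window, move to cold storage
# after 30 days, delete after 365 days, stored in an existing vault.
plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "daily-plan",                   # assumed name
    "Rules": [{
        "RuleName": "daily",
        "TargetBackupVaultName": "my-backup-vault",   # assumed existing vault
        "ScheduleExpression": "cron(0 3 * * ? *)",
        "StartWindowMinutes": 60,
        "Lifecycle": {"MoveToColdStorageAfterDays": 30, "DeleteAfterDays": 365},
        "RecoveryPointTags": {"project": "demo"},     # assumed tag
    }],
})

# Assign resources to the plan using a tag-based selection.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::123456789012:role/BackupRole",  # assumed role
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "Backup",
            "ConditionValue": "Daily",
        }],
    },
)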

20
Q

In Amazon Elastic File System (EFS), which of the following performance modes is ideal for applications where thousands of Elastic Compute Cloud (EC2) instances access a file system?

A. General Purpose mode
B. Enhanced Throughput mode
C. Max I/O mode
D. Provisioned IOPS mode

A

C. Max I/O mode

Explanation:
Amazon EFS offers two performance modes: General Purpose mode and Max I/O mode. The performance mode is selected when the file system is created. The Max I/O performance mode is best suited for applications where multiple EC2 instances access the file system, as it can scale to higher levels of aggregate throughput and operations per second with a tradeoff of negligibly higher latencies for file operations.
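Since the performance mode is chosen at creation time, a minimal boto3 sketch of creating a Max I/O file system looks like the following; the encryption setting and tag are illustrative assumptions.

import boto3

efs = boto3.client("efs")

# Create a file system in Max I/O performance mode (chosen at creation time).
fs = efs.create_file_system(
    PerformanceMode="maxIO",
    Encrypted=True,                                 # assumed
    Tags=[{"Key": "Name", "Value": "shared-fs"}],   # assumed tag
)
print(fs["FileSystemId"])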

21
Q

Which statement about EC2 instance store volumes is incorrect?

A. The instance store volumes reside on the same hardware as the host instance.
B. They are available for all instance types.
C. Instance store volumes offer very high I/O speed.
D. The storage cost is included in the EC2 instance price.

A

B. They are available for all instance types.

Explanation:
Even though EC2 instance store volumes are part of the EC2 service itself, they are not available for all instance types. To see which instance types offer them, be sure to review the documentation closely.

22
Q

Which of the following statements about AWS DataSync is false?

A. Tasks copy your storage data, file system permissions, and settings.
B. The task contains the locations that were created and specified for both the source and destination, in addition to the configuration and conditions of how the data transfer will take place.
C. Every time you create a DataSync task you will need to specify the source location and the destination location, dictating where you want to move data from and to.
D. The agent is simply a virtual machine supported by VMware ESXi, KVM, or Microsoft Hyper-V hypervisors.

A

A. Tasks copy your storage data, file system permissions, and settings.

Explanation:
When performing a data transfer from on-premises, you need to configure an agent, a location, and a task. The agent is used on the customer side, so it sits outside of AWS; it is simply a virtual machine supported by VMware ESXi, KVM, or Microsoft Hyper-V hypervisors, so it should be compatible with your existing infrastructure. The location identifies the endpoint of a DataSync task, so every time you create a DataSync task you will need to specify the source location and the destination location, dictating where you want to move data from and to. The task contains the details of the operation that you are trying to carry out with DataSync: it holds the locations that were created and specified for both the source and destination, in addition to the configuration and conditions of how the data transfer will take place. One point to highlight is that DataSync tasks will only copy your storage data; they do not include any file system permissions or settings.
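A minimal boto3 sketch of wiring the three elements together: an already-activated agent, a source and destination location, and a task. The hostnames, ARNs, and bucket name are illustrative assumptions.

import boto3

datasync = boto3.client("datasync")

# Source location: an on-premises NFS share reached through the agent.
source = datasync.create_location_nfs(
    ServerHostname="nfs.example.internal",   # assumed on-premises server
    Subdirectory="/export/data",
    OnPremConfig={"AgentArns": [
        "arn:aws:datasync:us-east-1:123456789012:agent/agent-0123456789abcdef0"  # assumed agent
    ]},
)

# Destination location: an S3 bucket with a bucket access role.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::my-destination-bucket",   # assumed bucket
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/DataSyncS3Role"},  # assumed role
)

# Task: ties the two locations together and holds the transfer configuration.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="onprem-to-s3",
)

# Run the transfer.
datasync.start_task_execution(TaskArn=task["TaskArn"])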

23
Q

Which types of AWS Storage Gateway offer local caches for frequently accessed data, with infrequently accessed data stored in the cloud? (Choose 2 answers)

A. File gateways
B. Tape gateways
C. Cached volume gateways
D. Stored volume gateways

A

A. File gateways
C. Cached volume gateways

Explanation:
Both file gateways and cached volume gateways provide local caches to store frequently accessed data. Stored volume gateways keep all files locally, so all stored data can be retrieved with reduced latency. Tape gateways are an archival method, and not ideal for data that needs to be readily available.

24
Q

Which AWS solution allows you to centralize and automate the data resilience and restoration processes across numerous AWS services such as Amazon S3, Amazon EBS, Amazon RDS, and Amazon DynamoDB?

A. AWS Config
B. AWS Resource Groups
C. Amazon Data Lifecycle Manager
D. AWS Backup

A

D. AWS Backup

Explanation:
AWS Backup allows a company to create and enforce policies around backing up numerous data services, and centralizes management of several AWS accounts.

Data Lifecycle Manager enables you to create policies for EBS volume and EBS-backed AMI creation, retention, and deletion.

AWS Config creates policies around creating resources, their approved configuration, and what changes, if any, are made to existing resources.

AWS Resource Groups allows you to manage numerous related resources as a single resource.

25
Q

Which of the following lists correctly names the AWS Snow family of devices, in order from smallest to largest?

A. Snowball, Snowmobile, Snowcone
B. Snowcone, Snowball, Snowmobile
C. Snowball, Snowcone, Snowmobile
D. Snowmobile, Snowball, Snowcone

A

B. Snowcone, Snowball, Snowmobile

Explanation:
Both from a physical and a capacity perspective, the Snowcone is the smallest, followed by the Snowball and finally the Snowmobile.

26
Q

You have decided to use AWS Storage Gateway, but want all data within the gateway to be retrievable by on-premises employees with minimal latency. You’ve decided to use block storage that is backed up as EBS snapshots. Which type of storage gateway would best suit you?

A. FSx File Gateway
B. S3 File gateway
C. Volume gateway
D. Tape gateway

A

C. Volume gateway

Explanation:
Both file gateways and cached volume gateways provide local caches to store frequently accessed data. Stored volume gateways keep all files locally, so all stored data can be retrieved with low latency, making the volume gateway (in its stored volume configuration) the best option in this case.