Cloud Tiering documentation Flashcards

1
Q

What is Cloud Tiering?

A

Based on NetApp FabricPool technology, Cloud Tiering identifies infrequently used data in your ONTAP clusters and automatically and seamlessly moves that data to low-cost object storage in the cloud.

NetApp’s Cloud Tiering service extends your data center to the public cloud by automatically tiering inactive data from ONTAP clusters to object storage. This frees valuable space on the cluster for more workloads, without making changes to the application layer. Cloud Tiering can reduce costs in your data center and enables a switch from a CAPEX model to an OPEX model.

The Cloud Tiering service leverages the capabilities of FabricPool. FabricPool is a NetApp Data Fabric technology that enables automated tiering of data to low-cost object storage. Active data remains on high-performance SSDs, while inactive data is tiered to low-cost object storage, all while preserving ONTAP data efficiencies.

Cloud Tiering offers automation, monitoring, reports, and a common management interface:

Automation makes it easier to set up and manage data tiering from ONTAP clusters to the cloud

A single pane of glass removes the need to independently manage FabricPool across several clusters

Reports show the amount of active and inactive data on each cluster

A tiering health status helps you identify and correct issues as they occur

2
Q

What’s new in Cloud Tiering?

A

NetApp periodically updates Cloud Tiering to bring you new features, enhancements, and bug fixes.

4 July 2019

Cloud Tiering was updated to fix a few bugs.

10 June 2019

This update includes the following new features and enhancements.

Cloud Tiering is now generally available
The Controlled Availability release of Cloud Tiering is complete. Cloud Tiering is now available for customer use from NetApp Cloud Central. A 30-day free trial is available for both AWS and Azure; it starts when you set up tiering to your first cluster.

Pay-as-you-go from the AWS Marketplace
After your free trial starts, subscribe to the Cloud Tiering service to ensure that there’s no disruption of service after the trial ends. When the trial ends, you’ll be charged hourly according to the amount of data that you tier.

Learn how to subscribe from the AWS Marketplace.

We’re planning to add Cloud Tiering to the Azure Marketplace as soon as Azure supports SaaS pricing.
Support for FlexGroup volumes
You can now tier inactive data from FlexGroup volumes to object storage, starting with ONTAP 9.5. Setup works the same as for any other volume.

5 May 2019

This update includes the following new features and enhancements.

Support for Microsoft Azure
You can now tier inactive data from your ONTAP clusters to Azure Blob storage.

Learn how to tier inactive data to Azure

Review support for Azure Blob access tiers and Azure regions

Ability to choose an IPspace for connections to object storage
When you set up tiering for an ONTAP cluster, you now need to select the IPspace that ONTAP should use to connect to object storage. Selecting the correct IPspace ensures that Cloud Tiering can set up a connection from ONTAP to your cloud provider’s object storage.

A screenshot that shows the Cluster Connectivity page that appears when setting up tiering on a new cluster. The page enables you to choose an IPspace on the ONTAP cluster.

To understand the requirements for the IPspace and the associated intercluster LIFs, refer to ONTAP cluster requirements:

Preparing to tier inactive data to AWS S3

Preparing to tier inactive data to Azure Blob storage

7 Apr 2019

This update includes the following new features and enhancements.

Support for FAS systems with all-SSD aggregates

Support for additional versions of ONTAP

Ability to choose the type of cold data that you want to tier

Ability to choose an S3 storage class

Support for FAS systems with all-SSD aggregates
In addition to AFF systems, Cloud Tiering now supports FAS systems that have one or more all-SSD aggregates.

Support for additional versions of ONTAP
Cloud Tiering now supports ONTAP 9.2 and 9.3. This is in addition to supporting ONTAP 9.4 and later.

Ability to choose the type of cold data that you want to tier
For ONTAP 9.4 and later, you can now choose the type of cold data that you want to tier. You can tier all cold data or just Snapshot copies.

If you have an ONTAP 9.2 or 9.3 system, you can tier Snapshot copies only.

The option is available after you click Set up Tiering.

Learn more about the cold data that you can tier

Learn how to tier data from your first cluster

Learn how to tier data from additional volumes

Ability to choose an S3 storage class
When you set up data tiering on a cluster for the first time, you can now choose to tier cold data to the S3 Standard storage class or to the Standard-Infrequent Access storage class. Learn about these S3 storage classes.
3
Q

How does Cloud Tiering work?

A

The following image shows the relationship between each component:

An architecture image that shows the Cloud Tiering service with a connection to the Service Connector in your cloud provider

At a high level, Cloud Tiering works like this:

You deploy a NetApp Service Connector in an AWS VPC (if you want to tier to S3) or in an Azure VNet (if you want to tier to Blob storage).

The Service Connector communicates with ONTAP clusters to discover the amount of active and inactive data on the cluster.

You choose the volumes to tier and the tiering policy to apply to those volumes.

You choose the object storage provider that you want to use.

The Service Connector configures ONTAP to use that object storage provider.

The Service Connector configures data tiering on each volume that you selected.

ONTAP starts tiering inactive data to the object store as soon as the data has met the thresholds to be considered inactive (see Volume tiering policies).
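The orchestration above is handled for you by the Cloud Tiering service and the Service Connector. Purely as an illustration of the sequence, here is a minimal pure-Python sketch; the class names, fields, bucket name, and placeholder numbers are hypothetical and do not correspond to any NetApp API.

```python
from dataclasses import dataclass, field

@dataclass
class Volume:
    name: str
    tiering_policy: str = "none"      # becomes "auto" or "snapshot_only" once tiering is set up

@dataclass
class Cluster:
    name: str
    ipspace: str
    volumes: list = field(default_factory=list)

def discover(cluster: Cluster) -> dict:
    """Step 2: the Service Connector reports active vs. inactive data per volume (placeholder numbers)."""
    return {v.name: {"active_gb": 100, "inactive_gb": 400} for v in cluster.volumes}

def attach_object_store(cluster: Cluster, provider: str, bucket: str) -> None:
    """Steps 4-5: point the cluster at the chosen object store over the selected IPspace."""
    print(f"{cluster.name}: tiering to {provider} bucket '{bucket}' via IPspace '{cluster.ipspace}'")

def enable_tiering(cluster: Cluster, volume_names: list, policy: str) -> None:
    """Steps 3 and 6: apply the chosen tiering policy to the selected volumes."""
    for vol in cluster.volumes:
        if vol.name in volume_names:
            vol.tiering_policy = policy

cluster = Cluster("cluster1", ipspace="Default",
                  volumes=[Volume("vol_projects"), Volume("vol_archive")])
print(discover(cluster))                                      # step 2
attach_object_store(cluster, "AWS S3", "tiering-cluster1")    # steps 4-5 (hypothetical bucket)
enable_tiering(cluster, ["vol_archive"], policy="auto")       # steps 3 and 6
```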

4
Q

NetApp Service Connector

A

The Service Connector is software that communicates with ONTAP clusters to discover the amount of active and inactive data on the cluster and to set up data tiering. Cloud Tiering prompts you to deploy the Service Connector when you discover your first ONTAP cluster. Connections to ONTAP take place from the Service Connector. A single Service Connector can discover multiple ONTAP clusters.

5
Q

Object storage

A

Each ONTAP cluster tiers inactive data to a single object store that Cloud Tiering creates for you. Cloud Tiering prompts you to choose a cloud storage provider when you set up tiering for each cluster.

6
Q

S3 storage classes

A

When you set up data tiering to AWS, you can choose the S3 Standard storage class or the Standard-Infrequent Access storage class. If you choose Standard-Infrequent Access, inactive data starts in the Standard storage class and transitions to Standard-Infrequent Access if it is not accessed for 30 days.
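Cloud Tiering creates and manages the object store for you, but a roughly comparable construct at the S3 level is a Lifecycle rule that transitions objects to Standard-IA 30 days after they are written (Lifecycle transitions are based on object age rather than last access). A minimal boto3 sketch, assuming a hypothetical bucket name and rule ID:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket; Cloud Tiering manages its own bucket and configuration.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-tiering-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "standard-to-standard-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},                          # apply to every object
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```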

7
Q

Amazon S3 Storage Classes

A

Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation. Amazon S3 also offers capabilities to manage your data throughout its lifecycle. Once an S3 Lifecycle policy is set, your data will automatically transfer to a different storage class without any changes to your application.

8
Q

Amazon S3 Storage Classes: General purpose

A

Amazon S3 Standard (S3 Standard)
S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. Because it delivers low latency and high throughput, S3 Standard is appropriate for a wide variety of use cases, including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics. S3 Storage Classes can be configured at the object level and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
Key Features:

Low latency and high throughput performance
Designed for durability of 99.999999999% of objects across multiple Availability Zones
Resilient against events that impact an entire Availability Zone
Designed for 99.99% availability over a given year
Backed with the Amazon S3 Service Level Agreement for availability
Supports SSL for data in transit and encryption of data at rest
S3 Lifecycle management for automatic migration of objects to other S3 Storage Classes
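The object-level nature of storage classes is easy to see with boto3: uploads default to S3 Standard, and a bucket listing reports each object’s class, so one bucket can mix classes. A small sketch with hypothetical bucket and key names:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"   # hypothetical

# With no StorageClass specified, the object is stored in S3 Standard.
s3.put_object(Bucket=bucket, Key="reports/latest.csv", Body=b"col1,col2\n1,2\n")

# A second object in the same bucket can use a different storage class.
s3.put_object(Bucket=bucket, Key="backups/2019-05.tar.gz", Body=b"...",
              StorageClass="STANDARD_IA")

# The listing shows the storage class of every object.
for obj in s3.list_objects_v2(Bucket=bucket)["Contents"]:
    print(obj["Key"], obj["StorageClass"])
```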

9
Q

Amazon S3 Storage Classes: Unknown or changing access

A

Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)
The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. For a small monthly monitoring and automation fee per object, Amazon S3 monitors access patterns of the objects in S3 Intelligent-Tiering, and moves the ones that have not been accessed for 30 consecutive days to the infrequent access tier. If an object in the infrequent access tier is accessed, it is automatically moved back to the frequent access tier. There are no retrieval fees when using the S3 Intelligent-Tiering storage class, and no additional tiering fees when objects are moved between access tiers. It is the ideal storage class for long-lived data with access patterns that are unknown or unpredictable. S3 Storage Classes can be configured at the object level and a single bucket can contain objects stored in S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can upload objects directly to S3 Intelligent-Tiering, or use S3 Lifecycle policies to transfer objects from S3 Standard and S3 Standard-IA to S3 Intelligent-Tiering. You can also archive objects from S3 Intelligent-Tiering to S3 Glacier.
Key Features:

Same low latency and high throughput performance of S3 Standard
Small monthly monitoring and auto-tiering fee
Automatically moves objects between two access tiers based on changing access patterns
Designed for durability of 99.999999999% of objects across multiple Availability Zones
Resilient against events that impact an entire Availability Zone
Designed for 99.9% availability over a given year
Backed with the Amazon S3 Service Level Agreement for availability
Supports SSL for data in transit and encryption of data at rest
S3 Lifecycle management for automatic migration of objects to other S3 Storage Classes
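Objects can be uploaded straight into S3 Intelligent-Tiering rather than transitioned into it later. A minimal boto3 sketch with hypothetical names:

```python
import boto3

s3 = boto3.client("s3")

# Upload directly into Intelligent-Tiering; S3 then moves the object between its
# frequent and infrequent access tiers automatically, based on access patterns.
s3.put_object(
    Bucket="my-example-bucket",       # hypothetical
    Key="logs/2019-06.tar.gz",
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",
)
```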

10
Q

Amazon S3 Storage Classes: Infrequent access

A

Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. S3 Storage Classes can be configured at the object level and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
Key Features:

Same low latency and high throughput performance of S3 Standard
Designed for durability of 99.999999999% of objects across multiple Availability Zones
Resilient against events that impact an entire Availability Zone
Data is resilient in the event of one entire Availability Zone destruction
Designed for 99.9% availability over a given year
Backed with the Amazon S3 Service Level Agreement for availability
Supports SSL for data in transit and encryption of data at rest
S3 Lifecycle management for automatic migration of objects to other S3 Storage Classes
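An object that is already stored in S3 Standard can be moved to Standard-IA either through a Lifecycle rule or by copying it over itself with a new storage class. A hedged boto3 sketch with hypothetical names:

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "my-example-bucket", "archive/2019-q1.tar.gz"   # hypothetical

# Copying an object onto itself with a new StorageClass rewrites it in place;
# the storage-class change is what makes the in-place copy a valid request.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    StorageClass="STANDARD_IA",
)
```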

11
Q

Amazon S3 Storage Classes: Infrequent access

A

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
S3 One Zone-IA is for data that is accessed less frequently, but requires rapid access when needed. Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs), S3 One Zone-IA stores data in a single AZ and costs 20% less than S3 Standard-IA. S3 One Zone-IA is ideal for customers who want a lower-cost option for infrequently accessed data but do not require the availability and resilience of S3 Standard or S3 Standard-IA. It’s a good choice for storing secondary backup copies of on-premises data or easily re-creatable data. You can also use it as cost-effective storage for data that is replicated from another AWS Region using S3 Cross-Region Replication.

S3 One Zone-IA offers the same high durability†, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. S3 Storage Classes can be configured at the object level, and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
Key Features:

Same low latency and high throughput performance of S3 Standard
Designed for durability of 99.999999999% of objects in a single Availability Zone†
Designed for 99.5% availability over a given year
Backed with the Amazon S3 Service Level Agreement for availability
Supports SSL for data in transit and encryption of data at rest
S3 Lifecycle management for automatic migration of objects to other S3 Storage Classes
† Because S3 One Zone-IA stores data in a single AWS Availability Zone, data stored in this storage class will be lost in the event of Availability Zone destruction.

12
Q

Amazon S3 Storage Classes: Amazon S3 Glacier (S3 Glacier)

A

Amazon S3 Glacier (S3 Glacier)
S3 Glacier is a secure, durable, and low-cost storage class for data archiving. You can reliably store any amount of data at costs that are competitive with or cheaper than on-premises solutions. To keep costs low yet suitable for varying needs, S3 Glacier provides three retrieval options that range from a few minutes to hours. You can upload objects directly to S3 Glacier, or use S3 Lifecycle policies to transfer data between any of the S3 Storage Classes for active data (S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA) and S3 Glacier. For more information, visit the Amazon S3 Glacier page.
Key Features:

Designed for durability of 99.999999999% of objects across multiple Availability Zones
Data is resilient in the event of one entire Availability Zone destruction
Supports SSL for data in transit and encryption of data at rest
Low-cost design is ideal for long-term archive
Configurable retrieval times, from minutes to hours
S3 PUT API for direct uploads to S3 Glacier, and S3 Lifecycle management for automatic migration of objects
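Uploading directly to S3 Glacier and restoring an object later might look like the following boto3 sketch (names are hypothetical; the retrieval options mentioned above correspond to the Expedited, Standard, and Bulk tiers):

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "my-archive-bucket", "archives/2018.tar"   # hypothetical

# Direct upload into the Glacier storage class via the normal S3 PUT API.
s3.put_object(Bucket=bucket, Key=key, Body=b"...", StorageClass="GLACIER")

# Glacier objects must be restored before their data can be read; the tier
# controls retrieval time (Expedited: minutes, Standard: hours, Bulk: longest).
s3.restore_object(
    Bucket=bucket,
    Key=key,
    RestoreRequest={
        "Days": 7,                                   # keep the restored copy for 7 days
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)
```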

13
Q

Amazon S3 Storage Classes: Glacier Deep Archive

A
Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive)
S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class and supports long-term retention and digital preservation for data that may be accessed once or twice in a year. It is designed for customers, particularly those in highly regulated industries such as financial services, healthcare, and the public sector, that retain data sets for 7-10 years or longer to meet regulatory compliance requirements. S3 Glacier Deep Archive can also be used for backup and disaster recovery use cases, and is a cost-effective and easy-to-manage alternative to magnetic tape systems, whether they are on-premises libraries or off-premises services. S3 Glacier Deep Archive complements Amazon S3 Glacier, which is ideal for archives where data is regularly retrieved and some of the data may be needed in minutes. All objects stored in S3 Glacier Deep Archive are replicated and stored across at least three geographically dispersed Availability Zones, are designed for 99.999999999% durability, and can be restored within 12 hours.
Key Features:

Designed for durability of 99.999999999% of objects across multiple Availability Zones
Lowest cost storage class designed for long-term retention of data that will be retained for 7-10 years
Ideal alternative to magnetic tape libraries
Retrieval time within 12 hours
S3 PUT API for direct uploads to S3 Glacier Deep Archive, and S3 Lifecycle management for automatic migration of objects

14
Q

Azure Blob access tiers

A

When you set up data tiering to Azure, Cloud Tiering automatically uses the Hot access tier for your inactive data. We plan to add support for the Cool access tier in a future release.
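For reference, the Hot and Cool access tiers are ordinary Blob storage properties. The following sketch uses the azure-storage-blob Python package; the connection string, container, and blob names are placeholders, and Cloud Tiering manages its own container and sets the access tier for you:

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection details; Cloud Tiering sets the access tier for you.
service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="tiered-data", blob="inactive/block-0001")

# Upload into the Hot access tier (the tier Cloud Tiering currently uses) ...
blob.upload_blob(b"...", overwrite=True, standard_blob_tier="Hot")

# ... and change the blob's tier later, for example to Cool.
blob.set_standard_blob_tier("Cool")
```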

15
Q

Volume tiering policies

A

When you select the volumes that you want to tier, you also choose the volume tiering policy to apply to those volumes. A tiering policy defines the type of cold data that you want to tier from the volumes.

Auto
ONTAP tiers all cold blocks in the volume to object storage. The cold data includes not just Snapshot copies but also cold user data from the active file system.

Cold data blocks on the cloud tier that are read randomly become hot and are moved to the performance tier. Cold data blocks that are read sequentially, such as by index and antivirus scans, stay cold and are not written to the performance tier.

Data is tiered only after an aggregate has reached 50% capacity and after the data has completed the cooling period, which is approximately 31 days. The cooling period is the time that user data in a volume must remain inactive before it is considered “cold” and moved to the object store.

Snapshot only
ONTAP tiers cold Snapshot blocks in the volume that are not shared with the active file system to object storage. If read, cold data blocks on the cloud tier become hot and are moved to the performance tier.

Data is tiered only after an aggregate has reached 50% capacity and after the data has completed the cooling period, which is approximately 2 days.
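The two policies differ only in which cold blocks are eligible and how long the cooling period is. The following pure-Python sketch illustrates the decision; the field names and thresholds are simplified illustrations of the rules described above, not ONTAP internals:

```python
from dataclasses import dataclass

COOLING_DAYS = {"auto": 31, "snapshot_only": 2}   # approximate cooling periods
AGGREGATE_THRESHOLD = 0.50                        # tiering starts once an aggregate is 50% full

@dataclass
class Block:
    days_inactive: int
    snapshot_only: bool        # True if the block belongs only to Snapshot copies

def should_tier(block: Block, policy: str, aggregate_used: float) -> bool:
    """Return True if a cold block is eligible to move to the cloud tier."""
    if aggregate_used < AGGREGATE_THRESHOLD:
        return False                                   # aggregate not full enough yet
    if policy == "snapshot_only" and not block.snapshot_only:
        return False                                   # only Snapshot blocks are eligible
    return block.days_inactive >= COOLING_DAYS[policy]

print(should_tier(Block(days_inactive=40, snapshot_only=False), "auto", aggregate_used=0.62))           # True
print(should_tier(Block(days_inactive=40, snapshot_only=False), "snapshot_only", aggregate_used=0.62))  # False
```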

16
Q

How Cloud Tiering licenses work

A

Cloud Tiering offers capacity-based licensing in a pay-as-you-go model. We plan to offer a bring your own license (BYOL) model in the coming months.

A 30-day free trial of Cloud Tiering starts when you set up tiering to your first cluster. You should subscribe to the Cloud Tiering service to ensure that there’s no disruption of service after your free trial ends. When the trial ends, you’ll be charged hourly according to the amount of data that you tier.

Learn how to subscribe from the AWS Marketplace.

We’re planning to add Cloud Tiering to the Azure Marketplace as soon as Azure supports SaaS pricing.

What happens if my free trial ends and I don’t subscribe?
No new data is tiered to object storage, but existing data is still available for access. Also, you can’t set up data tiering for any additional clusters.