Module 3: Business Continuity Solutions Flashcards
Business Continuity: use cases
This module discusses some use cases that enable you to better understand business continuity challenges in a real-world scenario. Your business continuity challenges might overlap, so you might need more than one solution to meet your requirements. Review each case in detail and explore various solutions for these challenges.
Use case 1: nondisruptive operations for the most critical SAN workloads, zero data loss
Use case 2: SAN and NAS workloads, zero data loss, nondisruptive operations
SnapMirror Business Continuity
SnapMirror Business Continuity protects your data LUNs, which enables applications to fail over transparently for business continuity in case of a disaster. SnapMirror Business Continuity is a business continuity solution with zero recovery point objective (RPO) and near-zero recovery time objective (RTO). SnapMirror Business Continuity gives you flexibility with easy-to-use application-level granularity and automatic failover. SnapMirror Business Continuity uses SnapMirror Synchronous (SM-S) replication over the IP network to replicate data at high speeds over LAN or WAN. This approach achieves high data availability and data replication for business-critical applications, such as Oracle and Microsoft SQL Server, in virtual and physical environments. With SnapMirror Business Continuity, you add LUNs to the consistency group to protect applications that are spread across multiple volumes. The LUN identity should be the same on both primary and secondary clusters. If the primary system goes offline, SnapMirror Business Continuity orchestrates automated failover, which makes manual intervention unnecessary.
Uses SM-S technology for zero RPO; provides application granularity when LUNs are added to a consistency group; uses the same LUN identity on both sides; provides transparent automatic failover.
ONTAP Mediator
ONTAP Mediator is an application that is provisioned on a Linux host at a third site. ONTAP Mediator monitors the storage systems, clusters, and connectivity to verify that the clusters are online and healthy. ONTAP Mediator establishes a quorum, preventing split-brain scenarios in which nodes assume that they are the sole surviving member and create multiple primary nodes. The ONTAP Mediator service maintains a consensus, with the primary cluster serving data and replicating to the secondary cluster. When the clusters are not synchronized, the cluster with the consensus serves data. The primary cluster can obtain the consensus from ONTAP Mediator by marking the secondary cluster as incapable. The primary cluster is given preference in obtaining the consensus, and the secondary cluster is not aggressive.
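The quorum behavior described above (only a cluster holding consensus may serve data, so a partitioned node cannot become a second primary) can be sketched as a simple majority vote. This is a minimal illustration of the idea, not ONTAP's actual implementation; all function names are hypothetical:

```python
# Minimal sketch of mediator-style quorum: a cluster may serve data only
# while it holds consensus, and consensus requires agreement from a
# majority of the three parties (primary, secondary, mediator).
# All names here are hypothetical illustrations, not ONTAP APIs.

def holds_consensus(votes_for_cluster: int, total_parties: int = 3) -> bool:
    """A cluster holds consensus only with a strict majority of votes."""
    return votes_for_cluster > total_parties // 2

def who_serves(primary_up: bool, secondary_up: bool,
               mediator_sees_primary: bool) -> str:
    """Decide which cluster serves data; the primary is preferred."""
    if primary_up and (secondary_up or mediator_sees_primary):
        return "primary"     # primary keeps consensus (2 of 3 agree)
    if secondary_up and not mediator_sees_primary:
        return "secondary"   # mediator confirms primary is gone: failover
    return "none"            # no majority: refuse to serve (no split brain)
```

Note the third branch: a primary that is isolated from both the secondary and the mediator stops serving rather than risk two simultaneous primaries, which is the split-brain scenario the mediator exists to prevent.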
Consistency Group
A consistency group is a collection of FlexVol volumes that provides write-order consistency across multiple volumes for an application workload. A consistency group supports any protocol, such as SAN, NAS, or NVMe, and can be managed via ONTAP System Manager or the REST API. Consistency groups simplify application workload management by providing crash-consistent or application-consistent point-in-time Snapshot copies of multiple volumes. In a regular deployment, the volumes within a consistency group map to an application instance. SnapMirror Business Continuity establishes consistency group relationships between source and destination consistency groups, with the primary and mirror consistency groups containing the same number and type of volumes. The individual volumes that form the consistency group are referred to as constituents or items. SnapMirror Business Continuity supports 50 consistency groups and 400 relationships. Starting from ONTAP 9.12.1, volumes can be added to or removed from an existing consistency group to accommodate application deployment changes.
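The crash-consistent, point-in-time property described above amounts to: fence writes on every constituent volume, capture all of them at the same instant, then resume I/O. A toy sketch of that idea, using in-memory dictionaries as stand-in "volumes" (not real ONTAP objects):

```python
# Sketch of write-order consistency across a consistency group: I/O to every
# constituent volume is fenced first, then all volumes are captured at the
# same point in time, then I/O resumes. The "volumes" here are hypothetical
# in-memory stand-ins, not ONTAP structures.

from copy import deepcopy

def snapshot_consistency_group(volumes: dict) -> dict:
    """Return one crash-consistent point-in-time copy of all constituents."""
    # 1. Fence writes on every constituent (simulated: nothing mutates here).
    # 2. Capture every volume at the same instant.
    snapshot = {name: deepcopy(data) for name, data in volumes.items()}
    # 3. Unfence writes; later mutations do not affect the snapshot.
    return snapshot

cg = {"vol_data": ["row1"], "vol_log": ["txn1"]}
snap = snapshot_consistency_group(cg)
cg["vol_log"].append("txn2")   # a write that lands after the snapshot
# snap still shows the consistent image captured across both volumes
```

The point of doing this per group rather than per volume is that no constituent can receive a write between the capture of one volume and the next, so a database's data and log volumes always restore to a mutually consistent state.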
SnapMirror Business Continuity Architecture
In a SnapMirror Business Continuity architecture, both clusters serve primary workloads, aiding compliance with periodic serviceability regulations. A data protection relationship is established between the source and destination storage systems by adding application-specific LUNs from different volumes within an SVM to the consistency group. During normal operations, the primary group synchronously replicates to the mirror group. When the primary storage system fails, ONTAP Mediator enables seamless application failover to the mirror group, eliminating the need for manual intervention or scripting.
Serves primary workloads from both clusters.
Adds application-specific LUNs from different volumes within an SVM to a consistency group.
Uses ONTAP mediator to detect failure and enable seamless application failover to the mirror consistency group.
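The flow in the bullets above (synchronous replication during normal operation, then a mediator-triggered takeover with the same LUN identity) can be sketched as follows. This is a conceptual illustration under stated assumptions, not NetApp's implementation; the class and attribute names are hypothetical:

```python
# Sketch of automated failover in an SM-BC-style relationship: during normal
# operation a write is acknowledged only after both copies have it (zero
# RPO); on primary failure the mirror becomes the active side, so writes
# continue without data loss. All names are hypothetical illustrations.

class Relationship:
    def __init__(self):
        self.primary, self.mirror = [], []
        self.active = "primary"

    def write(self, block: str) -> None:
        """Synchronous replication: commit to both sides before acking."""
        target = self.primary if self.active == "primary" else self.mirror
        other = self.mirror if self.active == "primary" else self.primary
        target.append(block)
        other.append(block)   # replicated before the write is acknowledged

    def failover(self) -> None:
        """Mediator-triggered takeover: the mirror becomes active."""
        self.active = "mirror"

rel = Relationship()
rel.write("b1")
rel.failover()        # primary site lost; mediator promotes the mirror
rel.write("b2")       # the host keeps writing, now served by the mirror
# rel.mirror holds ["b1", "b2"]: zero RPO across the failover
```

Because the mirror presents the same LUN identity as the primary (as the notes above state), the host multipathing stack sees the takeover as a path change rather than a new device, which is what makes the failover transparent.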
MetroCluster Solution
MetroCluster enables synchronous mirroring of volumes between two storage controllers, providing high-availability (HA) storage and disaster recovery. A MetroCluster configuration consists of two ONTAP storage systems, each residing in the same data center or in two different physical locations, clustered together. MetroCluster provides high availability, zero data loss, and nondisruptive operations (NDO). MetroCluster supports various host protocols and is available as MetroCluster FC, MetroCluster IP, and stretch MetroCluster. A MetroCluster configuration reduces acquisition and ownership costs due to its easily manageable architecture. MetroCluster capabilities seamlessly integrate into ONTAP software without the need for additional licenses. MetroCluster supports NetApp FlexGroup volumes, FabricPool technology, and SVM DR (as a source). MetroCluster enables a smooth transition from a four-node FC configuration to a new IP configuration, maintaining workload and data integrity.
MetroCluster features
Provides continuous data availability across data centers with near-zero data loss
Supports SAN and NAS systems (MetroCluster FC, MetroCluster IP)
Requires no additional licensing costs to set up the configuration
Supports NetApp FlexGroup volumes, FabricPool technology and SVM DR (as a source only)
Supports nondisruptive transition from a four-node MetroCluster FC configuration to a MetroCluster IP configuration
MetroCluster Configurations
MetroCluster configurations include two-node, four-node, and eight-node setups. In a two-node configuration, each site has a single-node cluster without local high availability; however, switchover operations provide nondisruptive resiliency. In a four-node configuration, each site has an HA pair, offering local-level and cluster-level data protection. An eight-node configuration consists of two HA pairs per site, providing data protection at both the local and cluster levels. MetroCluster IP supports unmirrored aggregates for non-redundant mirroring requirements.
2-node config: uses a single-node cluster at each site; protects data at the cluster level
4-node config: uses a 2-node cluster at each site; protects data at the local level and the cluster level
8-node config: uses a 4-node cluster at each site; protects data at the local level and the cluster level
MetroCluster Business Continuity Solution
MetroCluster is automatically enabled for symmetrical switchover and switchback; either site can switch over to the other when a disaster happens at one site. For MetroCluster functionality testing or planned maintenance, a negotiated switchover (NSO) cleanly switches over one cluster to its partner cluster. MetroCluster switchover allows quick service resumption after a disaster by moving storage and client access to the remote site. An unplanned switchover (USO) occurs when a site goes down. MetroCluster FC can trigger an automatic unplanned switchover (AUSO) if a site-wide controller failure occurs, but this behavior can be disabled. In MetroCluster IP, the ONTAP Mediator service facilitates mediator-assisted automatic unplanned switchover (MAUSO) by detecting failures.
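The switchover variants above differ mainly in what triggers them. A small classification sketch makes the distinctions concrete (an illustrative mapping only, not an ONTAP interface; the function name and parameters are hypothetical):

```python
# Sketch classifying MetroCluster switchover events by their trigger.
# Illustrative only: not an ONTAP command or API.

def switchover_type(planned: bool, fabric: str,
                    mediator_detected: bool = False) -> str:
    """Classify a MetroCluster switchover event by what triggered it."""
    if planned:
        return "NSO"    # negotiated switchover: maintenance or testing
    if fabric == "FC":
        return "AUSO"   # automatic unplanned switchover (can be disabled)
    if fabric == "IP" and mediator_detected:
        return "MAUSO"  # mediator-assisted automatic unplanned switchover
    return "USO"        # unplanned switchover performed by an administrator
```

For example, `switchover_type(planned=False, fabric="IP", mediator_detected=True)` classifies the event as a MAUSO, matching the MetroCluster IP behavior described above.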
Comparison: SnapMirror Synchronous, SnapMirror Business Continuity, and MetroCluster software
Manual application failover: granular application protection, manual app failover. Hardware, network, and storage agnostic.
Support for IP-only transport. Support for application dev/test with the secondary.
use SnapMirror Synchronous
Automated continuous
availability: complete cluster protection. Automated failover. High performance and scale. Support for FC/IP.
use MetroCluster software
Automatic app
failover: granular application protection, automatic app failover. Storage, network, and hardware agnostic. Support for
iSCSI and FC protocols. (With SnapMirror Synchronous, automatic failover is not supported.)
use SnapMirror Business Continuity
KCQ1: Which two statements about SnapMirror Business Continuity are true? (Choose two.)
Is hardware, network and storage agnostic
Provides granular application protection
Provides complete cluster protection
Provides support for iSCSI and FC protocol
Provides granular application protection
Provides support for iSCSI and FC protocol
KCQ2: Which two features are provided by the MetroCluster business continuity solution? (Choose two.)
Support for FC or IP transport
Manual or scripted failover
Complete cluster protection
Granular application protection
Support for FC or IP transport
Complete cluster protection
Backup and archive solutions
This module discusses some use cases that enable you to better understand backup and archive challenges in a real-world scenario. Your backup and archive challenges might overlap, so you might need more than one solution to meet your requirements. Review each case in detail and explore various solutions for these challenges.
Use case 1: reducing the amount of secondary storage, retaining Snapshot copies longer, compliance- and governance-related purposes
Use case 2: enterprise workloads and virtualization support, policy-driven backup and recovery, copy management, faster restores
SnapMirror for backup and archive
SnapMirror has different policies that you can apply to meet your various data protection challenges. With one policy, you can configure disaster recovery and archiving on the same destination volume. Another policy enables you to vault Snapshot copies to meet compliance requirements. You can use a Snapshot copy as a local backup to restore all the contents of the volume or to recover individual files or LUNs.
- Asynchronous backup to a secondary data center
- Unified replication: mirror and vault
Creating backups
Choose the backup method that best suits your environment: you can schedule automatic backups using policies or manually create backups. For most unstructured data, such as file shares, use either automatic or manual methods. Use scripts, SnapCenter, or backup software to create Snapshot copies; these methods are useful for structured data because they require coordination between the host application and the storage system. Regardless of the method you choose, you must ensure consistency for the host or application. To ensure consistency, pause I/O for the host or application, create the Snapshot copy on the storage system, and resume I/O after the Snapshot copy is created.
For unstructured data, such as file shares, use either automatic or manual methods.
For structured data, use scripts, SnapCenter, or backup software.
Pause I/O for the host, create the Snapshot copy, resume I/O.
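The pause/snapshot/resume sequence above maps naturally onto a context manager, which guarantees that I/O resumes even if the snapshot step fails. A minimal sketch, assuming a hypothetical in-memory Host class as a stand-in for quiescing a real application:

```python
# The pause -> snapshot -> resume sequence as a context-manager sketch.
# The Host class is a hypothetical stand-in for a real application that
# must be quiesced before a consistent Snapshot copy can be taken.

from contextlib import contextmanager
from copy import deepcopy

class Host:
    def __init__(self):
        self.io_paused = False
        self.data = []

    def write(self, item):
        if self.io_paused:
            raise RuntimeError("I/O is paused for backup consistency")
        self.data.append(item)

@contextmanager
def quiesced(host: Host):
    """Pause host I/O for the duration of the snapshot, then resume."""
    host.io_paused = True
    try:
        yield
    finally:
        host.io_paused = False   # I/O always resumes, even on error

host = Host()
host.write("record1")
with quiesced(host):
    snapshot = deepcopy(host.data)   # consistent: no writes can land here
host.write("record2")                # I/O resumed after the snapshot
```

The `try/finally` is the important design choice: a backup tool must never leave the application frozen because the snapshot step raised an exception.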