Module 3: Business Continuity Solutions Flashcards
Business Continuity: use cases
This module discusses some use cases that enable you to better understand business continuity challenges in real-world scenarios. Your business continuity challenges might overlap, so you might need more than one solution to meet your requirements. Review each case in detail and explore various solutions for these challenges.
Use case 1 = nondisruptive operations for most critical SAN workloads, zero data loss
Use case 2 = SAN and NAS workloads, zero data loss, nondisruptive operations
SnapMirror Business Continuity
SnapMirror Business Continuity protects your data LUNs, which enables applications to fail over transparently for business continuity in case of a disaster. SnapMirror Business Continuity is a business continuity solution for zero recovery point objective (RPO) and near-zero recovery time objective (RTO). SnapMirror Business Continuity gives you flexibility with easy-to-use application-level granularity and automatic failover. SnapMirror Business Continuity uses SnapMirror Synchronous (SM-S) replication over the IP network to replicate data at high speeds over LAN or WAN. This approach achieves high data availability and data replication for business-critical applications, such as Oracle and Microsoft SQL Server, in virtual and physical environments. With SnapMirror Business Continuity, you add LUNs to the consistency group to protect applications that are spread across multiple volumes. The LUN identity should be the same on both primary and secondary clusters. If the primary system goes offline, SnapMirror Business Continuity orchestrates automated failover, which makes manual intervention unnecessary.
Uses SM-S technology for zero RPO; provides application granularity when LUNs are added to a consistency group. Uses the same LUN identity on both sides; provides transparent automatic failover.
ONTAP Mediator
ONTAP Mediator is an application that is provisioned on a Linux host at a third site. ONTAP Mediator monitors the storage systems, clusters, and connectivity to verify that the clusters are online and healthy. ONTAP Mediator establishes a quorum, preventing split-brain scenarios in which nodes assume they are the sole surviving member and create multiple primary nodes. The ONTAP Mediator service maintains a consensus with the primary cluster serving data and replicating to the secondary cluster. When the clusters are not synchronized, the cluster with the consensus serves data. The primary cluster can obtain the consensus from ONTAP Mediator by marking the secondary cluster as incapable. The primary cluster is given preference in obtaining the consensus, and the secondary cluster is not aggressive.
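The consensus idea above can be sketched in a few lines of Python. This is a minimal illustration, not NetApp code: the class and method names are invented for this sketch. A third-site arbiter grants consensus to at most one cluster at a time, so the two sites can never both act as primary after a link failure (no split brain).

```python
class Mediator:
    """Toy third-site arbiter: grants consensus to at most one cluster."""

    def __init__(self):
        self.consensus_holder = None  # no cluster holds consensus yet

    def request_consensus(self, cluster):
        # Consensus is exclusive: a cluster may take it only if it is
        # unclaimed or already held by that same cluster. This models the
        # secondary's non-aggressive behavior: it cannot seize consensus
        # from a primary that already holds it.
        if self.consensus_holder in (None, cluster):
            self.consensus_holder = cluster
            return True
        return False


mediator = Mediator()
# The primary claims consensus first; the secondary is then refused,
# so only one site serves data.
assert mediator.request_consensus("cluster_A") is True
assert mediator.request_consensus("cluster_B") is False
```

The key design point is exclusivity: whichever cluster holds the consensus is the only one allowed to serve data, which is exactly what prevents two "sole survivors" from appearing.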
Consistency Group
A consistency group is a collection of FlexVol volumes that provides write-order consistency across multiple volumes for an application workload. A consistency group supports any protocol, such as SAN, NAS, or NVMe, and can be managed via ONTAP System Manager or the REST API. Consistency groups simplify application workload management, providing crash-consistent or application-consistent point-in-time snapshot copies of multiple volumes. In a regular deployment, the volumes within a consistency group map to an application instance. SnapMirror Business Continuity establishes consistency group relationships between source and destination consistency groups, with the primary and mirror consistency groups containing the same number and type of volumes. The individual volumes that form the consistency group are referred to as constituents or items. SnapMirror Business Continuity supports 50 consistency groups and 400 relationships. Starting from ONTAP 9.12.1, volumes can be added to or removed from an existing consistency group to accommodate application deployment changes.
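Write-order consistency can be illustrated with a short sketch. This is a simplified conceptual model, not ONTAP code; the volume structure and function names are invented. Every constituent volume is fenced before the point-in-time copy is taken, so no volume captures a write that its peers miss.

```python
import time


def group_snapshot(volumes):
    """Crash-consistent copy across all constituent volumes of a group."""
    for vol in volumes:
        vol["fenced"] = True                 # pause new writes everywhere first
    stamp = time.strftime("%Y-%m-%d_%H%M%S")
    # All constituents are captured at the same instant, which preserves
    # write order across the volumes of the application workload.
    copies = {vol["name"]: dict(vol["data"]) for vol in volumes}
    for vol in volumes:
        vol["fenced"] = False                # resume I/O
    return stamp, copies


# Example: an application spread across a data volume and a log volume.
db = [
    {"name": "oracle_data", "data": {"row1": "committed"}, "fenced": False},
    {"name": "oracle_log", "data": {"txn1": "logged"}, "fenced": False},
]
stamp, copies = group_snapshot(db)
assert copies["oracle_log"] == {"txn1": "logged"}
```

Because both volumes are fenced before either copy is taken, a transaction can never appear in the data-volume copy without its log entry, which is the property the consistency group guarantees.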
SnapMirror Business Continuity Architecture
In a SnapMirror Business Continuity architecture, both clusters serve primary workloads, which aids compliance with periodic serviceability requirements. A data protection relationship is established between the source and destination storage systems by adding application-specific LUNs from different volumes within an SVM to the consistency group. During normal operations, the primary consistency group synchronously replicates to the mirror consistency group. When the primary storage system fails, ONTAP Mediator enables seamless application failover to the mirror consistency group, eliminating the need for manual intervention or scripting.
Serves primary workloads from both clusters.
Adds application-specific LUNs from different volumes within an SVM to a consistency group.
Uses ONTAP mediator to detect failure and enable seamless application failover to the mirror consistency group.
MetroCluster Solution
MetroCluster enables synchronous mirroring of volumes between two storage controllers, providing high-availability (HA) storage and disaster recovery. A MetroCluster configuration consists of two ONTAP storage systems, clustered together, that reside either in the same data center or in two different physical locations. MetroCluster provides high availability, zero data loss, and nondisruptive operations (NDO). MetroCluster supports various host protocols and is available as MetroCluster FC, MetroCluster IP, and stretch MetroCluster. A MetroCluster configuration reduces acquisition and ownership costs thanks to its easily manageable architecture. MetroCluster capabilities seamlessly integrate into ONTAP software without the need for additional licenses. MetroCluster supports NetApp FlexGroup volumes, FabricPool technology, and SVM DR (as a source). MetroCluster enables a smooth transition from a four-node FC configuration to a new IP configuration, maintaining workload and data integrity.
MetroCluster features
Provides continuous data availability across data centers with near-zero data loss
Supports SAN and NAS systems (MetroCluster FC, MetroCluster IP)
Requires no additional licensing costs to set up the configuration
Supports NetApp FlexGroup volumes, FabricPool technology and SVM DR (as a source only)
Supports nondisruptive transition from a four-node MetroCluster FC configuration to a MetroCluster IP configuration
MetroCluster Configurations
MetroCluster configurations include two-node, four-node, and eight-node setups. In a two-node configuration, each site has a single-node cluster without high availability; however, switchover operations provide nondisruptive resiliency. In a four-node configuration, each site has an HA pair, offering local-level and cluster-level data protection. An eight-node configuration consists of two HA pairs per site, providing data protection at both the local and cluster levels. MetroCluster IP supports unmirrored aggregates for data that does not require redundant mirroring.
2-node config - uses a single-node cluster at each site; protects data at the cluster level
4-node config - uses a 2-node cluster at each site; protects data at the local level and the cluster level
8-node config - uses a 4-node cluster at each site; protects data at the local level and the cluster level
MetroCluster Business Continuity Solution
MetroCluster is automatically enabled for symmetrical switchover and switchback: one site can switch over to the other when a disaster happens. For MetroCluster functionality testing or planned maintenance, a negotiated switchover (NSO) cleanly switches one cluster over to its partner cluster. MetroCluster switchover allows quick service resumption after a disaster by moving storage and client access to the remote site. An unplanned switchover (USO) occurs when a site goes down. MetroCluster FC can trigger an automatic unplanned switchover if a site-wide controller failure occurs, although this behavior can be disabled. In MetroCluster IP, the ONTAP Mediator service facilitates mediator-assisted automatic unplanned switchover (MAUSO) by detecting failures.
Comparison: SnapMirror Synchronous, SnapMirror Business Continuity, and MetroCluster software
Manual application failover: Granular application protection, manual app failover. Hardware, network, and storage agnostic.
Support for IP-only transport. Support for application dev/test with the secondary.
use SnapMirror Synchronous
Automated Continuous Availability: Complete cluster protection. Automated failover. High performance and scale. Support for FC/IP.
** use MetroCluster Software **
Automatic App Failover: Granular application protection, automatic app failover. Storage, network, and hardware agnostic. Support for iSCSI and FC protocols. (With SnapMirror Synchronous, automatic failover is not supported.)
use SnapMirror Business Continuity
KCQ1: Which two statements about SnapMirror Business Continuity are true? (Choose two.)
Is hardware, network and storage agnostic
Provides granular application protection
Provides complete cluster protection
Provides support for iSCSI and FC protocol
Answers: Provides granular application protection; Provides support for iSCSI and FC protocol
KCQ2: Which two features are provided by the MetroCluster business continuity solution? (Choose two.)
Support for FC or IP transport
Manual or scripted failover
Complete cluster protection
Granular application protection
Answers: Support for FC or IP transport; Complete cluster protection
Backup and archive solutions
This module discusses some use cases that enable you to better understand backup and archive challenges in real-world scenarios. Your backup and archive challenges might overlap, so you might need more than one solution to meet your requirements. Review each case in detail and explore various solutions for these challenges.
Use case 1 - Reducing the amount of secondary storage; retaining snapshot copies longer for compliance- and governance-related purposes
Use case 2 - Enterprise workloads and virtualization support; policy-driven backup and recovery; copy management; faster restores
SnapMirror for backup and archive
SnapMirror has different policies that you can apply to meet your various data protection challenges. With one policy, you can configure disaster recovery and archiving on the same destination volume. Another policy enables you to vault snapshot copies to meet compliance requirements. You can use a snapshot copy as a local backup to restore all the contents of the volume or to recover individual files or LUNs.
- Asynchronous backup to secondary data Center
- Unified replication: mirror and vault
Creating backups
Choose the backup method that best suits your environment: you can schedule automatic backups using policies or create backups manually. For most unstructured data, such as file shares, use either automatic or manual methods. Use scripts, SnapCenter, or backup software to create snapshot copies of structured data; these methods are useful for structured data because they require coordination between the host application and the storage system. Regardless of the method you choose, you must ensure consistency for the host or application: pause I/O, create the snapshot copy on the storage system, and resume I/O after the snapshot copy is created.
For unstructured data, such as file shares, use either automatic or manual methods.
For structured data, use scripts, SnapCenter, or backup software.
Pause I/O for the host, create the snapshot, resume I/O.
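The pause/create/resume sequence above can be sketched as follows. This is a hedged illustration with invented stand-in classes, not the SnapCenter or ONTAP API; the try/finally guarantees that host I/O resumes even if snapshot creation fails.

```python
class HostApp:
    """Stand-in for the host application whose I/O must be quiesced."""
    def __init__(self):
        self.io_paused = False

    def pause_io(self):
        self.io_paused = True

    def resume_io(self):
        self.io_paused = False


class StorageSystem:
    """Stand-in for the storage system that creates snapshot copies."""
    def __init__(self):
        self.snapshots = []

    def create_snapshot(self, volume):
        name = f"{volume}.snap{len(self.snapshots)}"
        self.snapshots.append(name)
        return name


def application_consistent_backup(app, storage, volume):
    app.pause_io()                                 # 1. pause I/O for the host
    try:
        return storage.create_snapshot(volume)     # 2. create the snapshot
    finally:
        app.resume_io()                            # 3. always resume I/O


app, storage = HostApp(), StorageSystem()
snap = application_consistent_backup(app, storage, "db_vol")
assert snap == "db_vol.snap0" and app.io_paused is False
```

Wrapping the resume step in `finally` is the important habit here: a failed snapshot must never leave the application's I/O paused.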
Snapshot copies
A snapshot copy is an efficient backup image of a volume. A snapshot copy serves as a local backup, replacing the need to maintain multiple copies of important files. With snapshot copies, only changes require additional space, making them more efficient than traditional file copies. These copies can be used to restore individual files, LUNs, or the entire volume to a specific point in time. Snapshot copies can also serve as restore points for rollback, enabling quick and easy recovery to a previous state in case of upgrade failures or other issues.
Efficient backup images of volumes.
Serves as local backup
Only changes require space
Restores files, LUNs, and entire volumes to a specific point in time
Rollback restore points for quick recovery
Snapshot Policy
A snapshot policy defines how snapshot copies are created. Volumes inherit the default snapshot policy unless you specify otherwise during creation. You can choose the default schedule or a custom schedule; define a custom schedule before you create a snapshot policy that uses it. The snapshot policy includes the schedule, the number of retained copies, and the SnapMirror label. In the example policy, an hourly snapshot copy is created, retaining a maximum of eight copies; after nine hours, the oldest copy is replaced by the latest snapshot copy. You create snapshot policies by using NetApp ONTAP System Manager. When a snapshot schedule is applied, copies are named using the schedule type and a timestamp. ONTAP software provides default schedules: weekly, daily, and hourly. A weekly copy is created every Sunday at 15 minutes after midnight, a daily copy is created every night at 10 minutes after midnight, and an hourly copy is created every hour at 5 minutes after the hour. Snapshot copies for SnapMirror relationships follow a specific naming convention that includes the relationship ID and a timestamp. The example snapshot copy was created for a SnapMirror baseline copy.
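The example policy (hourly copies, keep a maximum of eight) can be modeled in a few lines. This is an illustrative sketch, not ONTAP's implementation; the function name is invented, but the naming convention (schedule type plus timestamp) follows the text above.

```python
def take_hourly_snapshot(snapshots, timestamp, keep=8):
    """Append a schedule-named copy; delete the oldest copies beyond `keep`."""
    snapshots.append(f"hourly.{timestamp}")   # name = schedule type + timestamp
    while len(snapshots) > keep:
        snapshots.pop(0)                      # oldest copy replaced by newest
    return snapshots


copies = []
for hour in range(1, 10):                     # nine hourly runs
    take_hourly_snapshot(copies, f"2024-01-01_{hour:02d}05")
assert len(copies) == 8                       # a maximum of eight retained
assert copies[0] == "hourly.2024-01-01_0205"  # the hour-1 copy was deleted
```

After the ninth run, exactly eight copies remain and the first (oldest) one has been rotated out, matching the behavior the flashcard describes.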
Space Reservation
ONTAP software preserves disk block pointers when creating a snapshot copy. Changes to files are written to new blocks, whereas the snapshot copy retains the original blocks, which enables restoring to a previous point in time before the modification. Snapshot copies effectively lock the blocks to which they point. The snapshot copy reserve designates a percentage of space for snapshot copies; the default reserve for flexible volumes is 5%. The active file system cannot use reserve space, but if the reserve is exhausted, snapshot copies can use active file system space.
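The pointer-preservation behavior can be illustrated with a small model. This is a conceptual sketch of copy-on-write, not WAFL internals, and all names are invented: a snapshot is only a copy of the block-pointer table, and modified data lands in new blocks while the snapshot keeps the originals locked.

```python
def make_snapshot(volume):
    # A snapshot copy is just a duplicate of the pointer table;
    # no data blocks are copied at creation time.
    return dict(volume["pointers"])


def write_file(volume, block_store, filename, data):
    # Changed data is written to a fresh block; the old block remains
    # allocated as long as any snapshot still points to it.
    new_block = max(block_store) + 1
    block_store[new_block] = data
    volume["pointers"][filename] = new_block


blocks = {1: "original contents"}
vol = {"pointers": {"report.txt": 1}}
snap = make_snapshot(vol)                      # "locks" block 1 in place
write_file(vol, blocks, "report.txt", "new contents")

assert blocks[vol["pointers"]["report.txt"]] == "new contents"  # active FS
assert blocks[snap["report.txt"]] == "original contents"        # snapshot view
```

Only the changed block consumed extra space; restoring to the snapshot is as cheap as pointing the volume back at the preserved pointer table.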
SnapMirror policies for backup and archive
The replication policy type determines the type of relationship that it supports. A baseline transfer under the default vault policy creates a snapshot copy of the source and then transfers that copy, along with the data blocks that it references, to the destination volume. Unlike SnapMirror relationships, a vault backup does not include older snapshot copies in the baseline. When you use ONTAP System Manager to configure SnapMirror, you use the asynchronous option in the menu.
SnapMirror Scenarios
A) A customer requires a solution to create monthly snapshot copies of data over a 20-year span to comply with government accounting regulations. - ** Customer A can use the SnapMirror feature with the vault policy type to archive data to a destination volume. You do not have access to the active data in the destination volume of a vault; you only have access to the snapshot copies. You can perform a SnapMirror restore from the destination volume by choosing a snapshot copy and replicating all the data from the baseline and the snapshot copies to return the data to the primary storage. **
B) A customer requires a solution to create a replica of the working data in secondary storage from which to continue to serve data if a disaster occurs at the primary site. - ** As you learned in the previous module, SnapMirror is the right choice for customer B. SnapMirror is the technology for both short-term and long-term retention; a volume replica and a vault archive are achieved using a unified mirror-vault policy. Unified replication brings together the mirror and vault capabilities of SnapMirror technology for disaster recovery and archiving to the same destination, protecting mission-critical business data. **
Dump and SMTape using NDMP
ONTAP software supports tape backup and restore through NDMP. NDMP enables direct tape backups from the storage system, conserving network bandwidth. ONTAP software supports the dump and SMTape engines for tape backup. Dump backs up files and directories, including access control list (ACL) information. SMTape is a disaster recovery solution that backs up data blocks; SMTape performs volume backups but not backups at the qtree or subvolume level. Dump and SMTape support baseline, differential, and incremental backups. NDMP is not supported on NetApp ONTAP FlexGroup volumes because of difficulties distinguishing between a file and a remote hard link, which is a file system link that points to a file located on a different system or storage device.
ONTAP supports
DUMP - backs up files and directories, including access control list (ACL) information
SMTape - backs up data blocks; performs volume backups to tape
** FlexGroup Volumes do not support NDMP**
Selecting a tape backup engine
To perform tape backup and restore operations, be aware of the use cases for the SMTape and dump backup engines, and then select the appropriate backup engine for your situation.
Dump - Direct Access Recovery (DAR) of files and directories.
Backup of a subset of subdirectories or files in a specific path
Excluding specific files and directories during backups
Preserving backups for long durations
SMTape - Disaster Recovery Solution
Preserving deduplication savings and deduplication settings on the backed-up data during a restore operation
Backup of large volumes
SnapCenter
SnapCenter is a centralized and scalable solution that provides application-consistent data protection for applications, databases, host file systems, and virtual machines (VMs) running on ONTAP systems anywhere in the hybrid cloud. The solution uses NetApp technologies like Snapshot copies, SnapRestore, FlexClone, and SnapMirror for fast, space-efficient, and application-consistent backups. SnapCenter enables rapid restore and application-consistent recovery. The solution consists of a server and lightweight plug-ins that can be deployed automatically to remote application hosts; backup, verification, and cloning can be scheduled and monitored. The SnapCenter plug-in for VMware vSphere is a separate setup on a Linux host for protecting virtualized databases and applications. SnapCenter supports fan-out architecture, but not cascading.
- provides application-consistent data protection in the hybrid cloud
- uses Snapshot copies, SnapRestore, FlexClone, and SnapMirror
- enables rapid restore and application-consistent recovery
- consists of a server and lightweight plug-ins that can be deployed automatically to remote app hosts
- supports fan-out architecture but not cascading
SnapCenter concepts
Here are several key concepts to help you understand the data backup workflow in SnapCenter. SnapCenter has a multitiered architecture with a central management server called the SnapCenter Server. You add hosts to SnapCenter for resource protection. Lightweight SnapCenter plug-ins enable application-consistent backup, restore, and cloning. Resources include the applications, databases, and host file systems that are managed with SnapCenter. Resource groups logically group similar resources for easy management. Application-specific backup policies define frequency, retention, and other data protection characteristics. The protection feature applies SnapCenter backup policies to resources or resource groups. Run As credentials specify the credentials for various SnapCenter operations.