Module 04 - Virtualized Data Center - Storage Flashcards
Process of masking the underlying complexity of physical storage resources and presenting the logical view of these resources to compute systems.
Storage Virtualization
(4.4)
Benefits of Storage Virtualization
1) Adds or removes storage without any downtime, 2) Increases storage utilization, reducing TCO, 3) Provides non-disruptive data migration between storage devices, 4) Supports heterogeneous, multi-vendor storage platforms, 5) Simplifies storage management
(4.5)
Example of storage virtualization at the compute layer
Storage provisioning for VMs
(4.6)
Examples of storage virtualization at the network layer
1) Block-level virtualization, 2) File-level virtualization
(4.6)
Examples of storage virtualization at the storage layer
1) Virtual provisioning, 2) Automated storage tiering
(4.6)
How are VMs stored?
As a set of files on storage space available to the hypervisor
(4.8)
How does a virtual disk appear to a VM?
As a local physical disk drive. The hypervisor may access FC storage devices, IP storage devices such as iSCSI, and NAS devices
(4.8)
Two file systems used by the hypervisor to manage the VM files
1) Virtual Machine File System (VMFS) - the hypervisor’s native file system, 2) Network File System (NFS) such as NAS file system
(4.9)
Characteristics of VMFS
1) Cluster file system, 2) Can be accessed by multiple compute systems simultaneously, 3) Provides on-disk locking
(4.10)
What are methods to expand a VMFS
1) Expand VMFS dynamically on the volume partition on which it is located, 2) Add one or more LUNs to the source VMFS volume.
(4.11)
Enables VM to directly access LUNs in a storage system.
Raw Device Mapping (RDM)
(4.12)
When is Raw Device Mapping (RDM) useful?
When an application running on a VM needs to know the physical characteristics of the storage device.
(4.12)
Benefits of Raw Device Mapping
1) Useful when a large volume of data on a LUN is impractical to maintain on a virtual disk, 2) Enables clustering a VM with a physical machine
(4.12)
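On an ESXi host, an RDM mapping file is typically created with `vmkfstools`; a hedged configuration sketch (the device identifier and datastore paths below are hypothetical placeholders):

```shell
# Create a physical-compatibility RDM mapping file (-z) that points a
# VM's virtual disk at a raw LUN; -r would create a virtual-compatibility
# RDM instead. Device ID and paths are placeholders, not real values.
vmkfstools -z /vmfs/devices/disks/naa.60060160a0b1c2d3 \
    /vmfs/volumes/datastore1/vm1/vm1_rdm.vmdk
```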
The ______ protocol is used by the hypervisor to access a NAS file system.
NFS
(4.13)
True or False: Hypervisors come with NFS client software for NFS server (NAS) access.
True
(4.13)
______ volumes are created on a NAS device, provide storage to VMs, and can be accessed by multiple compute systems simultaneously.
NFS
(4.13)
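Mounting an NFS export as a datastore is a single command on an ESXi host; a sketch, assuming hypothetical NAS hostname, export path, and volume name:

```shell
# Mount an NFS export from a NAS device as a datastore; the hypervisor's
# built-in NFS client software handles the protocol.
# Host, share, and volume name are illustrative, not real values.
esxcli storage nfs add --host=nas01 --share=/export/vmstore --volume-name=nfs_ds01
```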
Embeds storage virtualization intelligence at the network layer.
Network-based virtualization
(4.15)
How network-based storage virtualization is applied
1) Block-level (SAN), 2) File-level (NAS)
(4.15)
Creates an abstraction layer at the SAN, between physical storage resources and volumes presented to compute.
Block-level Storage Virtualization
(4.16)
True or False: Block-level storage virtualization enables the combination of several LUNs from one or more arrays into a single virtual volume before presenting it to the compute system.
True
(4.16)
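The mapping described above can be sketched in a few lines; this is an illustrative model (not a vendor API) of a virtual volume that concatenates LUNs from different arrays and translates a virtual block address to the backing LUN:

```python
# Sketch: block-level virtualization presents one virtual volume built
# from several LUNs; reads/writes to a virtual block address are
# redirected to the backing (array, LUN, offset). Names are hypothetical.

class VirtualVolume:
    def __init__(self, luns):
        # luns: list of (array_name, lun_id, size_in_blocks)
        self.luns = luns

    def map_block(self, vblock):
        """Translate a virtual block address to (array, lun_id, offset)."""
        offset = vblock
        for array, lun, size in self.luns:
            if offset < size:
                return (array, lun, offset)
            offset -= size
        raise ValueError("block beyond virtual volume capacity")

vol = VirtualVolume([("ArrayA", 0, 100), ("ArrayB", 7, 200)])
print(vol.map_block(50))    # served by ArrayA, LUN 0 -> ('ArrayA', 0, 50)
print(vol.map_block(150))   # spills into ArrayB, LUN 7 -> ('ArrayB', 7, 50)
```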
Encapsulates physical storage devices and applies layers of logical abstraction to present virtual volumes to compute layer.
Storage Virtualization Appliance
(4.17)
Mechanism used by a virtualization appliance to divide storage volumes; an extent may be all or part of the underlying storage volume. (EMC-specific term)
Extents
(4.17)
Enables clients to access files using logical names that are independent of the actual physical location. Maps the logical path of a file to its physical path name. Simplifies access to files: clients no longer need multiple mount points to access data located on different NAS devices.
File-Level Storage Virtualization - Global Namespace
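A global namespace is essentially a logical-to-physical path mapping; a minimal sketch (all paths and NAS device names are hypothetical):

```python
# Sketch: a global namespace maps logical file paths to physical
# locations on different NAS devices. Clients see only logical names,
# so files can move between devices without client reconfiguration.

namespace = {
    "/global/finance/report.xls": "nas1:/vol3/finance/report.xls",
    "/global/eng/design.doc":     "nas2:/vol1/eng/design.doc",
}

def resolve(logical_path):
    # After a file migration, only this mapping changes;
    # the logical path the client uses stays the same.
    return namespace[logical_path]

print(resolve("/global/eng/design.doc"))  # -> nas2:/vol1/eng/design.doc
```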
(4.18)
Provides an abstraction layer in the NAS / File servers environment.
File-level Storage Virtualization
(4.18)
Ability to present a LUN to a compute system with MORE capacity than what is physically allocated to the LUN.
Virtual Provisioning (Thin Provisioning)
(4.21)
Virtual Storage Provisioning can be implemented at what layers?
1) Storage Layer, 2) Compute Layer
(4.21)
Basic benefit of virtual provisioning
Better storage capacity utilization
(4.21)
Logical device where the physical storage need not be completely allocated at time of creation.
Thin LUN
(4.23)
Minimum amount of physical storage allocated at a time to a Thin LUN from a Thin Pool
Thin LUN Extent
(4.23)
Collection of physical drives that provide physical storage allocated to the Thin LUN.
Thin Pool
(4.23)
How is a Thin Pool created?
By specifying a set of drives and a RAID type for that pool. Thin LUNs are then created out of that pool (similar to a traditional LUN created on a RAID group).
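The thin pool / thin LUN relationship can be sketched as follows; the 8 MB extent size and capacities are illustrative, not vendor-specific:

```python
# Sketch: a thin LUN reports its full provisioned size to the host,
# but consumes physical extents from a shared pool only when data is
# actually written (allocation on demand).

EXTENT_MB = 8  # illustrative thin LUN extent size

class ThinPool:
    def __init__(self, capacity_mb):
        self.free_mb = capacity_mb

    def allocate_extent(self):
        if self.free_mb < EXTENT_MB:
            raise RuntimeError("thin pool exhausted")
        self.free_mb -= EXTENT_MB
        return EXTENT_MB

class ThinLUN:
    def __init__(self, pool, provisioned_mb):
        self.pool = pool
        self.provisioned_mb = provisioned_mb  # size the host sees
        self.allocated_mb = 0                 # physical space consumed

    def write(self, mb):
        # Allocate whole extents on demand until the write is covered.
        while self.allocated_mb < min(mb, self.provisioned_mb):
            self.allocated_mb += self.pool.allocate_extent()

pool = ThinPool(capacity_mb=1000)
lun = ThinLUN(pool, provisioned_mb=5000)  # oversubscribed vs. the pool
lun.write(20)
print(lun.provisioned_mb, lun.allocated_mb, pool.free_mb)  # 5000 24 976
```

Note the oversubscription: the host sees 5000 MB, yet only 24 MB of physical storage (three 8 MB extents) is consumed after a 20 MB write.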
(4.24)
Balances the used capacity of physical disk drives over the entire pool when new disk drives are added. Restripes data across all disk drives.
Thin Pool Rebalancing
(4.25)
How is virtual provisioning done at the compute level?
Hypervisor performs virtual provisioning to create virtual disks for VMs.
(4.26)
Two types of virtual disks offered by the hypervisor.
1) Thick disk, 2) Thin disk
(4.26)
Virtual Provisioning Benefits
1) Reduces administrative overhead, 2) Improves capacity utilization, 3) Reduces cost, 4) Reduces downtime
(4.27)
Virtual Provisioning Best Practices
1) Drives in a Thin pool should have the same RPM, 2) Drives in a Thin pool should be of the same size, 3) Provision Thin LUNs for applications that can tolerate some variation in performance
(4.28)
Establishing a hierarchy of storage types, and identifying candidate data to relocate to the appropriate storage type to meet service-level requirements at minimal cost.
Storage Tiering
(4.29)
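A tiering policy can be sketched as classifying data by recent activity and relocating it accordingly; this uses a simple I/O-count threshold at extent granularity (all names and values are illustrative, not any vendor's algorithm):

```python
# Sketch: automated storage tiering places the most active extents on
# a fast tier (SSD) and inactive ones on a cheaper tier (SATA), per a
# user-defined policy. Threshold and counts are illustrative.

def tier_extents(io_counts, hot_threshold):
    # io_counts: dict extent_id -> recent I/O count
    placement = {}
    for extent, ios in io_counts.items():
        placement[extent] = "SSD" if ios >= hot_threshold else "SATA"
    return placement

counts = {"ext0": 900, "ext1": 12, "ext2": 450}
print(tier_extents(counts, hot_threshold=400))
# -> {'ext0': 'SSD', 'ext1': 'SATA', 'ext2': 'SSD'}
```

Classifying at the extent level rather than the whole-LUN level is what makes sub-LUN tiering possible: only the hot segments of a LUN move to the expensive tier.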
Automates the storage tiering process. Enables non-disruptive data movement between tiers. Improves application performance at the same cost, or provides the same performance at a lower cost. Configures data movement within a storage array (intra-array) or between storage arrays (inter-array).
Automated Storage Tiering
(4.30)
Automates the storage tiering process within an array. Enables efficient use of SSD and SATA drive technologies. Performs data movement between tiers at the sub-LUN level. Employs cache tiering to further improve application performance.
Intra-Array Automated Storage Tiering
(4.31)
Enables a LUN to be broken down into smaller segments and tiered at that level. Different tiers have different performance characteristics.
Sub-LUN Tiering
(4.32)
True or False: Movement of data with much finer granularity (e.g., 8MB) greatly enhances the value proposition of automated storage tiering.
True
(4.32)
Three major building blocks of automated storage tiering
1) Storage Types, 2) Storage Groups, 3) Policies
(4.33)
Creates a large-capacity secondary cache using SSDs. Enables tiering between the DRAM cache and SSD drives (secondary cache). Most reads are then served directly from the high-performance tiered cache.
Cache Tiering
(4.34)
Automates the identification of active and inactive data and relocates it to different performance/capacity tiers across arrays.
Inter-Array Automated Storage Tiering
(4.35)
EMC product that provides Local, Metro, and Geo unified storage access.
EMC VPLEX
(4.37)
EMC Product that provides automated storage tiering for Thin pools. Supports data movement at sub-LUN level. Moves data based on user-defined policies and application performance needs. Data movement is automatic and non-disruptive.
EMC Symmetrix VMAX - FAST VP
(4.38)