CIS Test 2 Flashcards
Created from a physical machine or cluster.
Resource Pool
A logical abstraction of aggregated physical resources that are managed centrally.
Resource Pool
What must be done to resources in order to manage them centrally?
Resources must be POOLED to manage them centrally.
Goals of Resource Management
Controls utilization of resources. Prevents VMs from monopolizing resources. Allocates resources based on relative priority of VMs.
Process of allocating resources from physical machine or clustered physical machines to virtual machines (VMs) to optimize the utilization of resources.
Resource Management
Provides mouse, keyboard, and screen functionality. Sends power changes (on/off) to the virtual machine (VM). Allows access to the BIOS of the VM. Typically used for virtual hardware configuration and troubleshooting issues.
Virtual Machine Console
Makes a virtual machine portable across physical machines.
Standardized hardware
True or False: All virtual machines have standardized hardware.
TRUE
Enables storing VM files on a remote file server (NAS device). Client built into hypervisor.
Network File System (NFS)
Cluster file system that allows multiple physical machines to perform read/write operations on the same storage device concurrently. Deployed on Fibre Channel (FC) and iSCSI storage, apart from local storage.
Virtual Machine File System (VMFS)
Two file systems supported by hypervisors
Virtual Machine File System (VMFS) - Network File System (NFS)
VM File Set
Configuration file - Virtual disk files - Virtual BIOS file - Virtual machine swap file - Log file
Virtual Machine from a hypervisor’s perspective
A discrete set of files
Virtual Machine from a user’s perspective
A logical compute system
Hardware Assisted Virtualization
Achieved by using a hypervisor-aware CPU to handle privileged instructions. Reduces the virtualization overhead incurred by full virtualization and paravirtualization. CPU and memory virtualization support is provided in hardware. Enabled by AMD-V and Intel VT technologies in the x86 processor architecture.
Product examples that implement paravirtualization
Xen - KVM
Only possible in an open operating system environment
Paravirtualization
Not possible in closed source OSs such as Microsoft Windows.
Paravirtualization
Guest OS knows that it is virtualized. Guest OS runs in Ring 0. Modified guest OS kernel is used, such as Linux and OpenBSD. Unmodified guest OSs, such as Microsoft Windows, are NOT supported.
Paravirtualization
Product examples of hypervisors that implement the full virtualization technique
VMware ESX/ESXi - Microsoft Hyper-V (running in a server core environment, not as a Windows application)
VMM runs in the privileged Ring 0. VMM decouples the guest OS from the underlying physical hardware. Each VM is assigned a VMM. Guest OS is NOT aware of being virtualized.
Full Virtualization
Three techniques for handling privileged instructions to virtualize the CPU on x86 architectures
1) Full Virtualization using Binary Translation (BT) 2) Paravirtualization (OS-assisted Virtualization) 3) Hardware assisted Virtualization
Where most user applications run in x86 architecture
Ring 3 (least privileged)
Where OS runs in x86 architecture
Ring 0
Most privileged ring in x86 architecture
Ring 0
Four levels of privilege of the x86 architecture
Ring 0 - Ring 1 - Ring 2 - Ring 3
How is a traditional or typical OS designed?
To run on a bare-metal hardware platform and to fully own that hardware.
Challenges of virtualizing x86 hardware
Requires placing the virtualization layer below the OS layer. It is difficult to capture and translate privileged OS instructions at runtime.
Benefits of Compute Virtualization
Server Consolidation - Isolation - Encapsulation - Hardware Independence - Reduced Cost
Type of hypervisor most predominantly used within the Virtualized Data Center (VDC)
Type 1 Bare-Metal Hypervisor
Primary component of virtualization that enables compute system partitioning (i.e., partitioning of CPU and memory).
Hypervisor
Type of hypervisor installed and run as an application on top of an OS. Supports the broadest range of hardware configurations.
Type 2 Hosted Hypervisor
Type of hypervisor that is installed directly on x86-based hardware. Has direct access to the hardware resources.
Type 1 Bare-metal hypervisor
Two types of hypervisors
Type 1: Bare-metal hypervisor Type 2: Hosted hypervisor
What happens when a VM starts running?
Control is transferred to the Virtual Machine Monitor (VMM), which subsequently begins executing instructions from the VM.
Assigned to each VM and has a share of the CPU, memory, and I/O devices to successfully run the VM.
Virtual Machine Monitor (VMM)
Responsible for actually executing commands on the CPUs and performing Binary Translation.
Virtual Machine Monitor (VMM)
Abstracts hardware to appear as a physical machine with its own CPU, memory, and I/O devices.
Virtual Machine Monitor (VMM)
Designed specifically to support multiple virtual machines and to provide core functionalities, such as resource scheduling, I/O stacks, etc.
Hypervisor kernel
Functionality of a hypervisor kernel
Same functionality as other OSs, such as process creation, file system management, and process scheduling.
Two components of a hypervisor
1) Kernel 2) Virtual Machine Monitor (VMM)
Software that allows multiple OSs to run concurrently on a physical machine and to interact directly with the physical hardware.
Hypervisor
Technique of masking or abstracting the physical compute hardware and enabling multiple OSs to run concurrently on a single or clustered physical machine(s).
Compute Virtualization
Another name for the virtualization layer
Hypervisor
Layer which resides between hardware and VMs
Virtualization Layer
Logical entity that looks and behaves like a physical machine
Virtual Machine (VM)
First step towards building a cloud infrastructure
Virtualization
Which is the primary function of a hypervisor? a. Allows multiple OSs to run concurrently on a physical machine. b. Allows multiple OSs to run concurrently on a VM. c. Allows multiple file systems to run concurrently on a VM.
a. Allows multiple OSs to run concurrently on a physical machine.
Which VMFS feature ensures that a VM is not powered on by multiple compute systems at the same time? a. On-power lock b. On-VM lock c. On-disk lock d. On-compute lock
c. On-disk lock
Which technology enables a physical CPU to appear as two or more logical CPUs? a. Hyper-threading b. Multi-core c. Load balancing d. Ballooning
a. Hyper-threading
Which parameter determines the maximum amount of resource that a VM can consume? a. Share b. Limit c. Reservation d. Priority
b. Limit
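The reservation / limit / shares model behind these cards can be sketched in a few lines of Python (a minimal illustration with made-up numbers; real hypervisor schedulers are far more involved):

```python
# Hypothetical sketch of the reservation / limit / shares model.
def allocate(vms, capacity):
    """Distribute `capacity` (e.g. CPU MHz) among VMs.

    Each VM dict has: reservation (guaranteed minimum), limit (hard
    maximum), shares (relative priority for leftover capacity).
    """
    # Step 1: satisfy every reservation first (guaranteed minimum).
    alloc = {name: vm["reservation"] for name, vm in vms.items()}
    leftover = capacity - sum(alloc.values())
    # Step 2: split leftover proportionally by shares, capped by limit.
    total_shares = sum(vm["shares"] for vm in vms.values())
    for name, vm in vms.items():
        extra = leftover * vm["shares"] / total_shares
        alloc[name] = min(vm["limit"], alloc[name] + extra)
    return alloc

vms = {
    "web": {"reservation": 500, "limit": 2000, "shares": 2000},
    "db":  {"reservation": 1000, "limit": 4000, "shares": 1000},
}
print(allocate(vms, 3000))  # {'web': 1500.0, 'db': 1500.0}
```

A reservation guarantees the minimum, a limit caps the maximum, and shares decide how the leftover capacity is split among contending VMs.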
What is stored in a VM log file? a. Information of the VM’s activities b. VM’s RAM contents c. VM’s BIOS information d. Information of the VM’s configuration
a. Information of the VM’s activities
Contained in the Virtual Machine configuration file
Number of CPUs, memory, number and type of network adapters, number and type of disks
How does a large cache in a storage array improve performance?
A large cache in a storage array improves performance by retaining frequently accessed data for a longer period of time.
True or False: A policy may be applied to one or more previously defined storage groups.
TRUE
True or False: Tiering at the sub-LUN level moves more active data to faster drives and less active data to slower drives.
TRUE
True or False: Traditional storage tiering moves an entire LUN from one tier of storage to another.
TRUE
Configures data movement within a storage array (intra-array) or between storage arrays (inter-array).
Automated Storage Tiering
Each storage tier is optimized for what?
Each storage tier is optimized for a specific characteristic, such as performance, availability, or cost.
Thin LUNs are most appropriate for what type of applications?
Applications that can tolerate some variation in performance.
True or False: Adding drives to a Thin pool increases the available shared capacity for all the Thin LUNs in the pool.
TRUE
What happens to allocated capacity when Thin LUNs are destroyed?
Allocated capacity is reclaimed by the Thin pool when Thin LUNs are destroyed.
From the operating system’s perspective, Thin LUNs appear as what?
As traditional LUNs
Provides more efficient utilization of storage by reducing the amount of allocated but unused physical storage.
Virtual Provisioning (Thin Provisioning)
Provides an abstraction layer, enabling clients to use a logical name that is independent of the actual physical location.
Namespace
How does global namespace in a file-level storage virtualization appliance simplify access to files?
Clients no longer need to have multiple mount points to access data located on different NAS devices.
Maps logical path of a file to the physical path names.
Global namespace
Enables clients to access files using logical names which are independent of the actual physical location.
Global namespace
Used to map the logical path of a file to the physical path name.
Global Namespace
File-level virtualization simplifies what?
File mobility
Implemented using global namespace.
File-level Storage Virtualization
Enables movement of files between NAS systems without impacting client access.
File-level Storage Virtualization
Eliminates dependencies between the file and its location.
File-level Storage Virtualization
How does a virtualization appliance handle extents?
The virtualization appliance aggregates extents and applies RAID protection to create virtual volumes.
True or False: Extents may be all or part of the underlying storage volume.
TRUE
Available capacity on a storage volume is used to create what?
Extents and virtual volumes
A device or LUN on an attached storage system that is visible to the virtualization appliance.
Storage volume
Takes a single large LUN from an array, slices it into smaller virtual volumes, and presents these volumes to the compute systems.
Block-level storage virtualization
What does block-level storage virtualization support?
1) Dynamic increase of storage volumes. 2) Consolidation of heterogeneous storage arrays. 3) Transparent volume access
Uses virtualization appliance to perform mapping operation.
Block-level Storage Virtualization
Makes underlying storage infrastructure transparent to compute.
Block-level Storage Virtualization
Enables significant cost and resource optimization.
Block-level Storage Virtualization
Creates an abstraction layer at the SAN, between physical storage resources and volumes presented to compute.
Block-level Storage Virtualization
Where is virtualization applied in NAS?
At the file level.
Where is virtualization applied in a SAN?
At the block level
True or False: Network-based storage virtualization can be implemented in both SAN and NAS environments.
TRUE
What manages an NFS volume?
The NFS volume is managed entirely by the NAS system.
When is RDM recommended?
1) When there is a large amount of data on the LUN in the storage system. 2) When it is not practical to move the data onto a virtual disk. 3) When clustering a virtual machine with a physical machine.
Contains a symbolic link on VMFS volume to the LUN; acts as a proxy that allows direct access to a LUN.
Raw Device Mapping
True or False: VMFS can be dynamically expanded without disrupting running VMs.
TRUE
Provides the storage space on which a VMFS is created to store virtual machine files.
VMFS volume
What does VMFS provide to ensure that the same virtual machine is not powered on by multiple compute systems at the same time?
On-disk Locking
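Conceptually, on-disk locking behaves like the sketch below (a hypothetical in-memory lock table; the real VMFS mechanism writes lock records to the shared storage device itself):

```python
# Conceptual sketch of VMFS on-disk locking (not the real on-disk format).
# Before powering on a VM, a compute system must acquire its lock; a
# second system attempting the same power-on sees the lock and fails.
class OnDiskLockTable:
    def __init__(self):
        self._locks = {}  # vm_name -> host currently holding the lock

    def acquire(self, vm_name, host):
        if vm_name in self._locks and self._locks[vm_name] != host:
            return False  # another host already powered this VM on
        self._locks[vm_name] = host
        return True

    def release(self, vm_name, host):
        if self._locks.get(vm_name) == host:
            del self._locks[vm_name]

locks = OnDiskLockTable()
print(locks.acquire("vm01", "esxi-a"))  # True: first power-on succeeds
print(locks.acquire("vm01", "esxi-b"))  # False: concurrent power-on blocked
```

Because the lock lives on the shared storage rather than in any one host's memory, every compute system in the cluster observes the same lock state.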
Hypervisor’s native file system to manage VM files
Virtual Machine File System (VMFS)
Places very active parts of a LUN on high-performing Enterprise Flash Drives (EFDs). Places less active parts of a LUN on higher-capacity, more cost-effective SATA drives. Moves data at the extent group level.
Data movement at the sub-LUN level by EMC Symmetrix VMAX - FAST VP
Granularity with which FAST VP monitors data access
Each 7,680 KB region of storage
EMC FAST VP
Proactively monitors workloads at sub-LUN level in order to identify “busy” data that would benefit from being moved to higher-performing enterprise flash drives. Also identifies less “busy” data that could be moved to higher-capacity drives, without affecting the existing performance.
Automates the identification of Thin LUN extents for purposes of relocating application data across different performance / capacity tiers within an array.
EMC Symmetrix VMAX - FAST VP
Product that provides automated storage tiering for Thin pools. Supports data movement at the sub-LUN level. Moves data based on user-defined policies and application performance needs. Data movement is automatic and non-disruptive.
EMC Symmetrix VMAX - FAST VP
True or False: EMC VPLEX has a unique clustering architecture that allows VMs at multiple data centers to have read/write access to shared block storage devices.
TRUE
Where does EMC VPLEX reside?
Between the compute and heterogeneous storage systems.
Adds support for data mobility and access over extended asynchronous distances (beyond 100 km).
EMC VPLEX Geo
Only platform (as of late 2012) that delivers both local and distributed federation in a storage virtualization context.
EMC VPLEX
Three deployment models of EMC VPLEX
1) VPLEX Local 2) VPLEX Metro 3) VPLEX Geo
EMC VPLEX
Next generation solution for non-disruptive data mobility and information access within, across and between VDCs. Allows VMs at multiple VDCs to access the shared block storage device. Resides between compute and heterogeneous storage systems, virtualizing data movement. Offers three deployment models.
Automates the identification of active or inactive data in order to relocate it across different performance / capacity tiers between arrays.
Inter-Array Automated Storage Tiering
Benefits of Cache Tiering
1) Provides excellent performance benefit during peak workload. 2) Non-disruptive and transparent to applications.
Creates a large-capacity secondary cache using SSDs. Enables tiering between DRAM cache and SSD drives (secondary cache). Most reads are then served directly from the high-performance tiered cache.
Cache Tiering
Manage data movement across storage types in an automated storage tiering context.
Policies (a.k.a., tier usage rules)
A logical collection of LUNs that are to be managed together.
Storage Groups
Combination of drive technology (SSD, FC, or SATA) and a RAID protection type.
Storage Type
Three major building blocks of automated storage tiering
1) Storage Type 2) Storage Groups 3) Policies
True or False: Movement of data with much finer granularity (e.g., 8MB) greatly enhances the value proposition of automated storage tiering.
TRUE
Enables a LUN to be broken down into smaller segments and tiered at that level.
Sub-LUN Tiering
Automates the storage tiering process within an array. Enables efficient use of SSD and SATA drive technologies. Performs data movement between tiers at the sub-LUN level. Employs cache tiering to further improve application performance.
Intra-Array Automated Storage Tiering
Automated Storage Tiering
Automates the storage tiering process. Enables the non-disruptive data movement between tiers. Improves application performance at the same cost or provides the same performance at a lower cost. Configures data movement within a storage array (intra-array) or between storage arrays (inter-array).
In a storage tiering context, policies may be based on what factors?
File type, frequency of access, etc.
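A tiering policy based on these factors could be sketched as follows (tier names, file types, and thresholds are invented for illustration; real policies are configured per array):

```python
# Hypothetical policy: route data to a tier by file type and access
# frequency, the two policy factors named on the card above.
def choose_tier(file_type, accesses_per_day):
    if accesses_per_day > 100:
        return "SSD"   # "busy" data -> high-performance flash tier
    if file_type in {"log", "archive"} or accesses_per_day < 1:
        return "SATA"  # cold data -> high-capacity, low-cost tier
    return "FC"        # everything else -> middle tier

print(choose_tier("db", 500))     # SSD
print(choose_tier("archive", 0))  # SATA
```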
Efficient storage tiering requires implementation of what?
Policies
Storage Tiering Implementation Types
Manual storage tiering - Automated storage tiering
Establishing a hierarchy of storage types, and identifying candidate data to relocate to the appropriate storage type to meet service level requirements at minimal cost.
Storage Tiering
True or False: For applications demanding higher service levels, traditional LUNs on RAID groups are a more suitable choice than virtual provisioning.
TRUE
Virtual Provisioning Best Practices
1) Drives in Thin pool should have the same RPM. 2) Drives in the Thin pool should be of the same size. 3) Provision Thin LUNs for applications that can tolerate some variation in performance.
Virtual Provisioning Benefits
1) Reduces administrative overhead. 2) Improves capacity utilization. 3) Reduces cost. 4) Reduces downtime.
Thin Disk Provisioning
Hypervisor allocates storage space to the virtual disk only when the VM requires storage space. Eliminates the allocated, but unused storage capacity at the virtual disk. Eliminates the need to overprovision virtual disks.
Thick disk provisioning
Entire provisioned space is committed to the virtual disk
Two options for provisioning storage to virtual disk offered by the hypervisor
1) Provisioning thick disk 2) Provisioning thin disk
How is virtual provisioning done at the compute level?
Hypervisor performs virtual provisioning to create virtual disks for VMs.
Benefit of Thin Pool Rebalancing
Enables spreading out the data equally on all the physical disk drives within the Thin Pool, ensuring that the used capacity of each disk drive is uniform across the pool.
Restripes data across all the disk drives (both existing and new disk drives) in the thin pool.
Thin Pool Rebalancing
Provides the ability to automatically rebalance allocated extents on physical disk drives over the entire pool when new drives are added to the pool.
Thin Pool Rebalancing
Balances the used capacity of physical disk drives over the entire pool when new disk drives are added. Restripes data across all disk drives.
Thin Pool Rebalancing
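The restriping idea can be sketched as below (a simplified model that only counts allocated extents per drive; real arrays relocate the extents non-disruptively):

```python
# Sketch of thin-pool rebalancing: when drives are added to the pool,
# allocated extents are restriped so used capacity is uniform across
# both existing and new drives.
def rebalance(drives, new_drive_count):
    """drives: list of allocated-extent counts per existing drive."""
    total = sum(drives)
    all_drives = len(drives) + new_drive_count
    base, rem = divmod(total, all_drives)
    # Spread extents as evenly as possible over old + new drives.
    return [base + (1 if i < rem else 0) for i in range(all_drives)]

print(rebalance([90, 90, 90], 3))  # -> [45, 45, 45, 45, 45, 45]
```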
How is a Thin Pool created?
By specifying a set of drives and a RAID type for the pool. Thin LUNs are then created out of that pool (similar to a traditional LUN created on a RAID group).
True or False: Drives can be added to a Thin Pool while the pool is being used in production.
TRUE
True or False: Multiple thin pools may be created within a storage array.
TRUE
Collection of physical drives that provide the actual physical storage used by Thin LUNs. Can be expanded dynamically.
Thin Pool
From what is physical storage allocated to the Thin LUN?
Thin Pool
Minimum amount of physical storage allocated at a time to a Thin LUN from a Thin Pool
Thin LUN Extent
Logical device where the physical storage need not be completely allocated at the time of creation. Seen by the OS as a traditional LUN. Best suited for environments where space efficiency is paramount.
Thin LUN
Basic benefit of virtual provisioning
Better storage capacity utilization
Ability to present a logical unit (Thin LUN) to a compute system, with MORE capacity than what is physically allocated to the LUN on the storage array.
Virtual Provisioning
Capacity-on-demand from a shared storage pool, called a Thin pool. Physical storage is allocated only when the compute requires it.
Virtual Provisioning (Thin Provisioning)
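The capacity-on-demand behavior can be sketched as follows (extent size, class names, and numbers are made up for illustration; real arrays track allocation in on-disk metadata):

```python
# Sketch of thin provisioning: the Thin LUN reports its full logical
# size to the OS, but extents are drawn from the shared Thin pool only
# on first write.
EXTENT = 8  # MB per extent (actual granularity varies by vendor)

class ThinPool:
    def __init__(self, physical_mb):
        self.free_mb = physical_mb

    def take_extent(self):
        if self.free_mb < EXTENT:
            raise RuntimeError("thin pool exhausted")
        self.free_mb -= EXTENT

class ThinLUN:
    def __init__(self, pool, logical_mb):
        self.pool, self.size = pool, logical_mb  # size seen by the OS
        self.allocated = set()  # extents actually backed by storage

    def write(self, offset_mb):
        extent = offset_mb // EXTENT
        if extent not in self.allocated:  # allocate only on first write
            self.pool.take_extent()
            self.allocated.add(extent)

pool = ThinPool(physical_mb=64)
lun = ThinLUN(pool, logical_mb=1024)  # OS sees 1024 MB
lun.write(0)
lun.write(500)
print(lun.size, len(lun.allocated) * EXTENT)  # 1024 logical, 16 allocated
```

Note how the presented capacity (1024 MB) exceeds both the allocated capacity (16 MB) and even the pool's physical capacity, which is exactly the oversubscription the cards describe.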
Where Virtual (Thin) provisioning may be implemented
Storage layer - Compute layer: virtual provisioning for virtual disk
Ability to present a LUN to a compute system with MORE capacity than what is physically allocated to the LUN.
Virtual Provisioning (Thin Provisioning)
Benefits of global namespace
By bringing multiple file systems under a single namespace, global namespace provides a single view of the directories and files. Provides administrators with a single control point for managing files.
Provides an abstraction layer in the NAS / File servers environment.
File-level Storage Virtualization
File-Level Storage Virtualization - Global Namespace
Enables clients to access files using logical names which are independent of the actual physical location. Maps the logical path of a file to the physical path names. Simplifies access to files: clients no longer need to have multiple mount points to access data located on different NAS devices.
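The logical-to-physical mapping that a global namespace maintains can be sketched as a simple lookup table (server names and paths below are invented examples):

```python
# Sketch of a global namespace: one logical path maps to the NAS device
# and physical path that actually hold the file, so clients never need
# to know which NAS system a file lives on.
namespace = {
    "/corp/finance/q3.xlsx": ("nas-01", "/export/vol2/q3.xlsx"),
    "/corp/hr/handbook.pdf": ("nas-07", "/export/vol9/handbook.pdf"),
}

def resolve(logical_path):
    # Clients use only the logical path; the appliance looks up where
    # the file physically resides.
    return namespace[logical_path]

print(resolve("/corp/finance/q3.xlsx"))  # ('nas-01', '/export/vol2/q3.xlsx')
```

Moving a file between NAS systems then only requires updating the table entry; the logical path that clients use stays the same, which is why file-level virtualization enables non-disruptive file mobility.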
Mechanisms used by a virtualization appliance to divide storage volumes. May be all or part of the underlying storage volume. Aggregated by the virtualization appliance and subjected to RAID protection to create virtual volumes. Terminology is vendor-specific.
Extents
Encapsulates physical storage devices and applies layers of logical abstraction to create virtual volumes.
Virtualization appliance
True or False: Block-level storage virtualization enables the combination of several LUNs from one or more arrays into a single virtual volume before presenting it to the compute system.
TRUE
Role of the virtualization appliance in block-level storage virtualization
Performs mapping between the virtual volume and the LUNs on the array.
Block-level Storage Virtualization
Creates an abstraction layer at the SAN, between physical storage resources and volumes presented to compute. Uses virtualization appliance to perform mapping operation. Makes underlying storage infrastructure transparent to compute. Enables significant cost and resource optimization.
Facilitates an Information Lifecycle Management (ILM) strategy
Deploying block-level storage virtualization in a heterogeneous arrays environment.
Enables non-disruptive data migration between arrays.
Network-based virtualization
Provides the ability to pool heterogeneous storage resources. Performs non-disruptive data migration. Manages a pool of storage resources from a single management interface.
Network-based virtualization
How network-based storage virtualization is applied
Block-level (SAN) - File-level (NAS)
Embeds storage virtualization intelligence at the network layer.
Network-based virtualization
Created on a NAS device. Provides storage to VMs. Accessed by multiple compute systems simultaneously.
NFS Volumes
True or False: Hypervisors come with NFS client software for NFS server (NAS) access.
TRUE
Used by the hypervisor to access the NAS file system
NFS protocol
In Raw Device Mapping, what file on the VMFS volume is used?
Mapping File
Benefits of Raw Device Mapping
1) Provides a solution when a huge volume of data on a LUN is not practical to move onto a virtual disk. 2) Enables clustering a VM with a physical machine.