Configuring Software-Defined Storage. Flashcards
Benefits of NFS 4.1.
NFS 4.1 uses server-side locking, whereas NFS 3 uses proprietary client-side locking, which makes the two protocols incompatible with each other. For this reason, you should never mount the same datastore on some hosts with NFS 3 and on other hosts with NFS 4.1.
NFS v4.1 Multipathing and Load Balancing.
In NFS 3, there is no provision for load balancing or multipathing because there is no way to configure more than one server address.
As shown in Figure 8-2, the NFS 4.1 configuration wizard makes it easy to configure multiple addresses, which you can then use to configure multipathing.
You simply add addresses with a comma between them and click the “+” sign.
The wizard then indicates clearly the addresses that are being added, as shown in Figure 8-3.
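The same multipath configuration can also be sketched from the ESXi command line. This is a hedged example assuming ESXi 6.x syntax; the server addresses, export path, and datastore label are placeholders:

```shell
# Mount an NFS 4.1 datastore, passing both addresses of the same NFS
# server so that the host can multipath between them.
esxcli storage nfs41 add \
  --hosts=192.168.1.10,192.168.1.11 \
  --share=/exports/datastore1 \
  --volume-name=NFS41-DS1

# Verify the mount and the configured server addresses.
esxcli storage nfs41 list
```

Note that both addresses must belong to the same NFS server; NFS 4.1 multipathing is not a means of spanning two different servers.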
Additional Security in NFS 4.1 Provided by Kerberos Authentication.
In NFS 3, authentication at the NFS server is provided by giving the root account access to the share. As you may remember from Chapter 7, “Connecting Shared Storage Devices to vSphere,” you even had to turn off the inherent “root squash” protection for that share.
This is an effective method, but it certainly is not the most secure way to handle the connection.
In NFS 4.1, you can still use the root account and the traditional no_root_squash setting, but you can also define a specific user on each host, associated with an account that you have created in your Active Directory, using the command-line configuration esxcfg-nas -U -v 4.1.
You should use the same Active Directory user for all hosts that are going to be associated with each other.
If two ESXi hosts in the same environment have different users, vMotion could fail.
In addition, each ESXi host should be joined to the Active Directory domain, and Kerberos should be enabled in the datastore configuration wizard, as shown in Figure 8-4.
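A hedged command-line sketch of the same setup, assuming ESXi 6.x tooling; the domain name, account, server, and share below are placeholders, and the --sec flag name should be verified with esxcli storage nfs41 add --help on your build:

```shell
# Join the ESXi host to the Active Directory domain (repeat on each
# host, using the same domain).
/usr/lib/vmware/likewise/bin/domainjoin-cli join lab.local administrator

# Mount the NFS 4.1 datastore with Kerberos security instead of AUTH_SYS.
esxcli storage nfs41 add \
  --hosts=nfs.lab.local \
  --share=/exports/secure \
  --volume-name=NFS41-KRB \
  --sec=SEC_KRB5
```

The Kerberos user itself (the Active Directory account mentioned above) is supplied per host in the host's authentication settings, and it should be the same account on every host that shares the datastore.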
Virtual SAN Hardware Requirements.
Virtual SAN (VSAN) is a vSphere cluster setting that provides a distributed layer of software that runs natively on all the hosts in the cluster and creates a shared storage area that all the hosts can use.
It requires a cluster of at least three hosts and can scale to the current maximum of 64 hosts per cluster.
VSAN is relatively easy to configure and provides many benefits with regard to your vSphere storage options.
VSAN is a software-defined approach that turns otherwise wasted local storage volumes on hosts into an aggregated shared storage location that supports HA, DRS, and so on.
However, to participate in a VSAN cluster, your hosts must have the correct hardware configuration.
Table 8-2:
Cache - One SAS or SATA SSD or PCIe flash device. Cache devices must not be formatted.
VM Data Storage - At least one device: either one SAS, NL-SAS, or SATA magnetic disk (hybrid) or one SAS or SATA SSD or PCIe flash device (all-flash).
Storage Controllers - SAS or SATA HBA, or RAID controller in passthrough or RAID 0 mode.
In general, you need two disks; both can be SSD, or one can be magnetic and the other SSD.
One disk will be used for storage capacity while the other will be used only for caching (read caching and write buffering in hybrid configurations; write buffering in all-flash configurations) to improve performance.
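The capacity/cache pairing described above is called a disk group, and it can be created from the command line as well as from the Web Client. A hedged sketch, assuming ESXi 6.x esxcli syntax; the naa.* device IDs are placeholders for your own devices:

```shell
# List local devices; note which ones are flagged "Is SSD: true".
esxcli storage core device list

# Build one disk group: one SSD as the cache tier (-s) and one device
# as capacity (-d).
esxcli vsan storage add -s naa.50000f0000000001 -d naa.50000f0000000002

# Confirm the disk group membership.
esxcli vsan storage list
```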
RAM: Each host should contain a minimum of 32 GB of memory. This accommodates the maximum configuration of five disk groups per host, with one caching device and up to seven capacity devices per group.
CPU: Finally, you should consider that VSAN will place a load on the CPU of the host. The actual load will vary depending on your configuration and use of the VSAN.
The additional load should not be more than 10 percent of current CPU load on the host.
It should also be mentioned that hosts in the cluster that do not contribute local storage to the VSAN can still store their VMs' files on the VSAN datastore.
Virtual SAN Network Requirements.
Table 8-3:
Host Bandwidth - Dedicated 1 Gbps for hybrid configurations; dedicated or shared 10 Gbps for all-flash configurations.
Connection Between Hosts - A host must be part of the VSAN cluster to use VSAN resources.
Host Network - Hosts must be connected to the same layer 2 network.
Multicast - Must be enabled on all switches/routers that will handle VSAN traffic.
IPv4/IPv6 Support - Only IPv4 is supported.
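VSAN traffic is carried on a VMkernel interface that you tag for that purpose. A hedged sketch, assuming ESXi 6.x syntax; vmk1 is a placeholder for whichever VMkernel port sits on the shared layer 2 network:

```shell
# Tag a VMkernel interface for VSAN traffic.
esxcli vsan network ipv4 add -i vmk1

# Show the interfaces carrying VSAN traffic, including the multicast
# group addresses in use (which must be allowed on the switches).
esxcli vsan network list
```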
Use Cases for VSAN Configurations.
In essence, VSAN provides the same opportunity for storage management that vSphere provides for compute resource management: software-based control with a “single pane of glass.” For a business that is experiencing tremendous growth, VSAN can add storage capacity every time the business purchases a new host.
In addition, VSAN removes a layer of complexity associated with creating partitions and logical unit numbers (LUNs) that may or may not be used, depending on what transpires for the business in the future.
Instead, each vmdk and snapshot can be individually controlled for redundancy and performance within the same aggregated datastore.
It’s truly a new and different way of looking at storage that will begin to transform both server and virtual desktop infrastructure (VDI) environments in the years to come.
Configuring and Managing Virtual Volumes (VVOLs).
Much like VSAN, VVOLs provide a software-based policy management solution.
The difference is that this storage solution can extend well beyond the local disk capacity of your hosts.
VVOLs allow you to do away with Gold, Silver, and Bronze storage type “guessing games” that might cause you to overprovision some levels of storage while underprovisioning others.
With VVOLs, the right level of storage can be provisioned automatically when each VM is created.
In the long run, this also saves time because you (or your storage admin) will not have to create the partitions and the LUNs that go with the traditional storage guessing games.
The general steps required to configure VVOLs are as follows:
- Register storage providers for virtual volumes.
- Create a virtual datastore.
- Review and manage protocol endpoints.
- Optionally, modify multipathing policies.
Configuring and Managing Virtual Volumes (VVOLs):
Registering Storage Providers for Virtual Volumes.
Third-party storage vendors provide software that works through vSphere APIs for Storage Awareness (VASA).
This software, rather than the vendor itself, is referred to as the storage provider.
Your VVOLs will use this software to provide communication between vSphere and the storage.
The storage characteristics appear in the VM Storage Policies interface so you can use them to create storage policies for the VMs.
These policies can then be enforced to provide for the redundancy of the VM files and their performance characteristics.
To use VVOLs, you must first register these storage providers.
You can register a new Storage Provider on the Manage/Storage Providers tab of the vCenter Server in your vSphere Web Client, as shown in Figure 8-5.
The credentials that you will use to authenticate to a specific provider URL can be obtained from your storage vendor or your storage administrator.
Configuring and Managing Virtual Volumes (VVOLs):
Create a Virtual Datastore.
After you have registered the Storage Providers to be used with your VVOLs, you then need to create a datastore that will represent the logical connection to the physical volumes that provide the storage.
You begin to create a new VVOL in much the same way that you create any other datastore, by right-clicking your data center in Datastores view, then clicking Storage and then New Datastore, as shown in Figure 8-6.
You should then select Next and then VVOL, as shown in Figure 8-7.
Then it’s just a matter of associating the backing storage container to your new datastore, as shown in Figure 8-8.
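If you want to confirm the mapping from a host's point of view, the storage containers visible through the registered storage providers can be listed from the command line; a hedged sketch, assuming ESXi 6.x syntax:

```shell
# List the VVOL storage containers the host can see, along with the
# array-side container IDs backing each virtual datastore.
esxcli storage vvol storagecontainer list
```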
Configuring and Managing Virtual Volumes (VVOLs):
Review and Manage Protocol Endpoints.
Much like iSCSI uses targets or storage processors to provide a connection of the host to the underlying storage, the VVOL system uses an entity called a protocol endpoint (PE).
Protocol endpoints are exported, along with their associated storage containers, by the storage system through the storage provider software.
They become visible in the vSphere Web Client after you map a storage container to a virtual datastore.
You can view and modify the protocol endpoints as needed by clicking the Manage and then Storage tabs of the host, as shown in Figure 8-9.
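The same information is available from the host's command line; a hedged sketch, assuming ESXi 6.x syntax:

```shell
# List the protocol endpoints exported by the array, with their
# transport (SCSI or NFS) and current accessibility state.
esxcli storage vvol protocolendpoint list
```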