Connecting Shared Storage Devices to vSphere: Flashcards

1
Q

Identifying Storage Adapters and Devices.

A

Four main types of storage adapters:

  1. Fibre Channel
  2. Fibre Channel over Ethernet (FCoE)
  3. iSCSI
  4. Network-attached storage (NAS)
2
Q

Fibre Channel.

A

Generally speaking, Fibre Channel is a technology used primarily for storage-area networking (SAN).
It can run over fiber-optic or copper cabling.
At the time of this publication, Fibre Channel has a lower overhead than TCP/IP and is offered with speeds of 1, 2, 4, 8, 10, and 20 Gbps.
The main advantages of Fibre Channel are its flexibility and the fact that it does not put a load on the Ethernet network.
Its chief disadvantage is cost; Fibre Channel implementations often cost considerably more than other options.

3
Q

FCoE

A

Fibre Channel over Ethernet (FCoE) is an encapsulation of Fibre Channel frames so they can be sent over Ethernet networks.
FCoE allows the Fibre Channel protocol to be used on Ethernet networks of 10 Gbps or higher speeds.
You can use a specialized type of adapter called a converged network adapter (CNA) or, beginning with vSphere 5, you can connect any supported network adapter to a VMkernel port to be used for FCoE.
The main advantage is that you do not have to support both a Fibre Channel fabric and an Ethernet network, but instead can consolidate all networking and storage to the Ethernet network.
The chief disadvantages are the higher cost of cards suitable for FCoE and the additional traffic placed on the Ethernet network.

4
Q

iSCSI

A

iSCSI is a common networking standard for linking data storage facilities that is based on the Internet Protocol (IP).
iSCSI facilitates data transfers by carrying SCSI commands over an IP network, generally the intranets of organizations.
It is mostly used on local-area networks (LANs), but it can also be used on wide-area networks (WANs), or even through the Internet with the use of tunneling protocols.
vSphere supports up to 10 Gbps iSCSI.

5
Q

NAS

A

Network-attached storage (NAS) is file-level data storage provided by a computer that is specialized to provide not only the data but also the file system for the data.
In some ways, NAS is like a glorified mapped drive.
The similarity is that the data to which you are connecting is seen as a share on the NAS device.
The difference is that the device that is storing the data and providing the file system is specially designed for just this purpose and is generally extremely efficient at sharing the files.
Protocols that can be used on a NAS include the Common Internet File System (CIFS) and the Network File System (NFS).
Of these, vSphere supports only NFS.

6
Q

VSAN

A

Virtual storage-area network (VSAN) is a new type of shared storage that is very different from all of the others previously discussed.
It leverages the local drives of hosts to create a virtual storage area made up of multiple physical drives that are dispersed among the hosts in a vSphere cluster.
Each VM's disk (VMDK file) is treated as a separate object that can be assigned attributes, such as how many times it should be replicated and across how many disks in the VSAN.
In addition, VSAN leverages the power of any additional solid state drives (SSDs) on the hosts in the cluster for read caching and write buffering to improve performance.
VSAN is simple to use once you have a vSphere 5.5 or later cluster, have created the VMkernel ports for it (at least one per host), and have enabled it on the cluster.

7
Q

Storage Naming Conventions.

A

Your naming convention will depend on the technology that you have chosen. If you are using local drives or a SAN technology, such as iSCSI or Fibre Channel, you will use a naming convention associated with a vmhba. If you are using NAS, your naming convention will be associated with the share name of the data source.

8
Q

Storage Naming Conventions for Local and SAN.

A

Three most common naming conventions for local and SAN storage, with a brief description of each:

  1. Runtime name: Uses the convention vmhbaN:C:T:L, where:
     vm stands for VMkernel.
     hba is host bus adapter.
     N is a number corresponding to the host bus adapter location (starting with 0).
     C is channel; the first connection is always 0 in relation to vSphere. (An adapter that supports multiple connections will have a different channel number for each connection.)
     T is target, which is a storage adapter on the SAN or local device.
     L is the logical unit number (LUN).
  2. Canonical name: The Network Address Authority (NAA) ID, a unique identifier for the LUN. This name is guaranteed to be persistent even if adapters are added or changed and the system is rebooted.
  3. SCSI ID: The unique SCSI identifier that signifies the exact disk or disks that are associated with a LUN.
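
For example, the runtime name vmhba1:C0:T3:L12 (in the form ESXi displays it) identifies LUN 12 behind target 3 on channel 0 of host bus adapter 1. The following is a minimal pyVmomi sketch, not taken from the source, that prints each LUN's canonical NAA name and the runtime names of its paths; the vCenter address and credentials are placeholders.

```python
# Minimal sketch: list canonical (NAA) names and runtime path names
# for the SCSI LUNs on one ESXi host. Placeholder vCenter and credentials.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]  # first host found

storage = host.configManager.storageSystem
for lun in storage.storageDeviceInfo.scsiLun:
    print(lun.canonicalName, "-", lun.displayName)  # e.g., naa.600508b1...

# Runtime names (vmhbaN:C:T:L) belong to the paths, not to the LUN itself
for mp_lun in storage.storageDeviceInfo.multipathInfo.lun:
    for path in mp_lun.path:
        print(path.name)  # e.g., vmhba1:C0:T3:L12

Disconnect(si)
```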
9
Q

Hardware/Dependent Hardware/Software iSCSI Initiators.

A

The decision comes down to how much work you want the VMkernel to do with regard to iSCSI versus how much you want to pay for the NICs that will be used for iSCSI.

Two processes have to take place to create effective iSCSI storage:

  1. Discovery: The process of the host finding the iSCSI storage and identifying the LUNs that are presented.
  2. TCP offload: The process of offloading some of the management of the TCP connection from the host’s CPU. The device or service that does this is referred to as the TCP Offload Engine (TOE).

The real question then is whether you want the VMkernel to be associated with discovery and/or with TOE. You have three choices, as follows:

  1. Hardware (independent hardware) iSCSI initiator: In this case, a smarter and more expensive adapter is used that provides for discovery of the LUN as well as TOE. This completely removes the responsibility from the VMkernel and from the processors on the host. VMkernel ports are not required for this type of card. The host has only to supply the card with the drivers, and the card does the rest. If you have determined that your VMkernel is overloaded, this is an option that can improve performance.
  2. Dependent hardware iSCSI initiator: In this case, the card provides for the TOE, but the VMkernel must first provide the discovery of the LUN. This takes some of the work off the VMkernel and the processors on the host, but not all of it. In addition, VMkernel ports are required for this type of card. If possible, they should be on the same subnet as the iSCSI array that contains the data. In addition, if possible, the cards should be dedicated to this service.
  3. Software iSCSI initiator: In this case, the VMkernel provides for the discovery of the LUNs as well as for the TOE. The disadvantage of this type of initiator is that the VMkernel is doing all the work. This fact does not necessarily mean that performance will suffer. If the VMkernel is not otherwise overloaded, benchmark tests show this type of initiator to be every bit as fast as the others. In addition, software initiators allow for options such as bidirectional Challenge Handshake Authentication Protocol (CHAP) and per-target CHAP.
10
Q

Zoning.

A

As you can see in Figure 7-4, each component (also called a node) of a Fibre Channel fabric is identified uniquely by a 64-bit address that is expressed in hexadecimal, called a World Wide Name (WWN).
Two types of nodes:
1. Storage processor (SP)
2. Fibre Channel host bus adapter (HBA)

The storage administrator can configure zoning on the Fibre Channel switches to control which WWNs can see which other WWNs through the switch fabric, also referred to as soft zoning.

In addition, the Fibre Channel switch might employ hard zoning, which determines which ports of the switch will be connected to storage processors.

The purpose of using both of these methods is to keep you from accidentally accessing storage processors that do not apply to you, and thereby accessing volumes that do not apply to you either.

11
Q

Masking.

A

Whereas zoning controls which HBAs can see which SPs through the switch fabric, masking controls what the SPs tell the host with regard to the LUNs that they can provide.
In fact, masking cannot be done in the GUI of an ESXi 5.x or later host, but only on the command line.
So, what does that tell you? It should tell you that VMware recommends that the masking be done on the SP through communication with the storage administrator.
That way, everyone knows about it, and it does not cause “troubleshooting opportunities” later.

12
Q

Scanning/Rescanning Storage.

A

When you add a new host, that host automatically scans up to 256 Fibre Channel SAN LUNs (0–255).
If you are installing the host locally, you might want to keep the Fibre Channel disconnected until you have the host installed, and then connect the Fibre Channel and perform the scan.

However, iSCSI storage is automatically scanned whenever you create and configure a new iSCSI storage adapter.

Thereafter, if you make a change to the physical storage, you should rescan to make sure that your hosts see the latest physical storage options.
This is not done automatically because it takes resources to perform the scan, so VMware leaves it to your control as to when the scan should be done.

Figure 7-6
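
A rescan can also be triggered through the API rather than the client. A minimal pyVmomi sketch, under the same placeholder connection assumptions as the earlier naming-convention example:

```python
# Sketch: rescan a host's storage adapters and VMFS volumes.
# Placeholder vCenter address and credentials, as in the earlier sketch.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

storage = host.configManager.storageSystem
storage.RescanAllHba()  # scan every HBA for new devices/LUNs
storage.RescanVmfs()    # look for new or updated VMFS volumes
Disconnect(si)
```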

13
Q

Creating an NFS Share for Use with vSphere.

A

Your ESXi host will use one of its VMkernel ports to access the share.

You must configure the share with the appropriate permissions and other attributes (also known as flags) so that your ESXi host can use its VMkernel ports to gain access to the share.
The main aspects of your configuration should be as follows (a sample export entry appears after the list):

  1. A hostname or IP address that will be used as the target of the share. Take great care to always use the same IP address when connecting multiple hosts to the datastore. If the NAS device has two IP addresses, even for redundancy, it will be seen as two different datastores and not as one shared datastore.
  2. The shared folder or hierarchy of folders. This is case sensitive.
  3. Read-Write permissions for the share, so that you can configure your side for normal permissions or for Read-Only, as needed. (You should not use Read-Only if you will be running VMs from this datastore.)
  4. Sync (synchronous) rather than asynchronous for the type of communication. If you are going to run VMs, you need the system to communicate that a task is done when it is actually done, not when it is listed to be done or when it has begun.
  5. No root_squash. As a security measure, most administrators configure NFS shares with root_squash so that an attack presenting itself as root will not be given privileges. (In fact, it is the default setting.) Because this might keep you from accessing the VM files, you should configure the NFS share with no root_squash.
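
Putting these settings together, here is a sketch of what the matching entry might look like in /etc/exports on a Linux-based NFS server. The exported path and client subnet are illustrative; rw, sync, and no_root_squash are the standard export options named above.

```
# /etc/exports (illustrative path and subnet)
# rw             = Read-Write, required if VMs will run from the datastore
# sync           = acknowledge writes only after they are committed to disk
# no_root_squash = let the ESXi host, which connects as root, keep root privileges
/exports/vmstore  192.168.1.0/24(rw,sync,no_root_squash)
```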
14
Q

Connecting to an NAS Device.

A

After the NFS share is created, connecting to the NAS device is rather simple.

You add a new storage location, but instead of it being another LUN, it is the share that you have created on the NAS.

You can then configure the new storage to be normal (Read and Write) or Read-Only.

As mentioned before, use Read-Only only if you are configuring the share for ISO files and not to store VM files.

Also, for better performance and security, you should use a different VMkernel port on a separate vmnic.
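
As an illustration, mounting such a share can also be scripted with the CreateNasDatastore API. This is a sketch with placeholder server address, share path, and datastore name, not a sequence taken from the source.

```python
# Sketch: mount an NFS share as a datastore on one host.
# Placeholder values throughout; connection boilerplate as in earlier sketches.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

spec = vim.host.NasVolume.Specification(
    remoteHost="192.168.1.50",      # use the SAME address on every host
    remotePath="/exports/vmstore",  # case sensitive
    localPath="nfs-vmstore",        # datastore name as seen in vSphere
    accessMode="readWrite")         # "readOnly" only for ISO libraries
ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print("Mounted:", ds.name)
Disconnect(si)
```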

15
Q

Configuring and Editing Hardware and Dependent Hardware Initiators.

A

Independent hardware iSCSI adapter: manufactured to handle both TOE and discovery, and therefore requires no VMkernel port.

Dependent Hardware iSCSI Adapter: specialized third-party adapter that you have purchased to install into your ESXi host (for example, a Broadcom 5709 NIC).
When you install the adapter, it presents two components to the same port:
1. simple network adapter
2. iSCSI engine.
After installation, the iSCSI engine appears on your list of storage adapters.
It is enabled by default, but to use it, you must associate it with a VMkernel port and then configure it.
You should also follow any third-party documentation associated with the card.

16
Q

Configuring and Editing Software Initiator Settings.

A

Instead of purchasing expensive cards that perform the discovery or TOE, you can rely on the VMkernel to do both.
To configure a software iSCSI initiator, you must add an iSCSI software initiator and associate it to a VMkernel port.
It is a best practice to use a separate vmnic for each type of IP storage that you use.
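
A minimal pyVmomi sketch of those steps follows, including a hypothetical discovery (send) target; all addresses and names are placeholders, and the VMkernel port association is assumed to be handled separately (see the next card).

```python
# Sketch: enable the software iSCSI initiator and add a send target.
# Placeholder addresses; connection boilerplate as in earlier sketches.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

storage = host.configManager.storageSystem
storage.UpdateSoftwareInternetScsiEnabled(True)  # turn on the software initiator

# Point the software iSCSI HBA at a hypothetical discovery (send) target
for hba in storage.storageDeviceInfo.hostBusAdapter:
    if isinstance(hba, vim.host.InternetScsiHba) and hba.isSoftwareBased:
        target = vim.host.InternetScsiHba.SendTarget(
            address="192.168.1.60", port=3260)
        storage.AddInternetScsiSendTargets(
            iScsiHbaDevice=hba.device, targets=[target])
storage.RescanAllHba()  # pick up any newly presented LUNs
Disconnect(si)
```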

17
Q

Configuring iSCSI Port Binding.

A

When you configure iSCSI port binding, you associate specific VMkernel ports to specific iSCSI adapters.
You can associate more than one so that if one should fail, the other can take its place.
In this way, you can create a multipath configuration with storage that presents only a single storage portal, such as Dell EqualLogic or HP LeftHand.

Figure 7-14.
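
For reference, a hedged pyVmomi sketch of port binding. The adapter name (vmhba64) and VMkernel ports (vmk1, vmk2) are hypothetical, and each VMkernel port is assumed to already satisfy the binding rules (a single active uplink).

```python
# Sketch: bind two VMkernel ports to the software iSCSI adapter for multipathing.
# Hypothetical names; connection boilerplate as in earlier sketches.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

iscsi = host.configManager.iscsiManager
for vmk in ("vmk1", "vmk2"):                          # one path per VMkernel port
    iscsi.BindVnic(iScsiHbaName="vmhba64", vnicDevice=vmk)
print(iscsi.QueryBoundVnics(iScsiHbaName="vmhba64"))  # verify the bindings
Disconnect(si)
```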