Chapters 4 - 6 Flashcards

1
Q

You have VUM installed, and you’ve configured it from the vSphere Desktop Client on your laptop. One of the other administrators on your team is saying that she can’t access or configure VUM and that there must be something wrong with the installation. What is the most likely cause of the problem?

A

The most likely cause is that the VUM plug-in hasn’t been installed in the other administrator’s vSphere Desktop Client. The plug-in must be installed on each instance of the vSphere Desktop Client in order to be able to manage VUM from that instance.

2
Q

In addition to ensuring that all your ESX/ESXi hosts have the latest critical and security patches installed, you need to ensure that all your ESX/ESXi hosts have another specific patch installed. This additional
patch is noncritical and therefore doesn’t get included in the critical patch dynamic baseline. How do you work around this problem?

A

Create a baseline group that combines the critical patch dynamic baseline with a fixed baseline that contains the additional patch you want installed on all ESX/ESXi hosts. Attach the baseline group to all your ESX/ESXi hosts. When you perform remediation, VUM will ensure that all the critical patches in the dynamic baseline plus the additional patch in the fixed baseline are applied to the hosts.

3
Q

You’ve just finished upgrading your virtual infrastructure to VMware vSphere. What two additional tasks should you complete?

A

Upgrade VMware Tools in the guest OSs and then upgrade the virtual machine hardware to version 11.

4
Q

How can you avoid VM downtime when applying patches (that is, remediating) to your ESX/ESXi hosts?

A

VUM automatically leverages advanced VMware vSphere features like Distributed Resource Scheduler (DRS). If you make sure that your ESX/ESXi hosts are in a fully automated DRS cluster, VUM will leverage vMotion and DRS to move VMs to other ESX/ESXi hosts, avoiding downtime to patch the hosts.

5
Q

Which VUM functionality can simplify the process of upgrading vSphere across a large number of hosts and their VMs?

A

VUM can take care of these interactions in an automated fashion with what is known as an orchestrated upgrade. An orchestrated upgrade combines several baseline groups that include updates for the hosts and subsequent updates for the VMs’ hardware and VMware Tools.
Virtual appliance upgrade baselines can also be included. When combined with fully automated DRS clusters and sufficient redundant capacity,
potentially an entire vCenter’s host inventory can be upgraded in one orchestrated task.

6
Q

Without using VUM, how else can you upgrade an existing host?

A

You can grab the CD install media and run an interactive upgrade on the host. Or you can use the built-in command-line tools on the hosts themselves: esxcli software vib update (see VMware Knowledge Base article 2008939 for full details) or esxcli software vib install to patch them with individual VIBs.
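
A minimal command sketch of the esxcli route, assuming the host is already in maintenance mode and an offline bundle has been uploaded to a datastore (the file paths here are hypothetical):

# Update all installed VIBs from an offline depot bundle
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi550-update.zip

# Or patch with an individual VIB
esxcli software vib install -v /vmfs/volumes/datastore1/net-driver.vib

A reboot is typically required once the operation completes.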

7
Q

You have just started a new job as the vSphere administrator at a company. The company hasn’t previously centralized the hosts’ logs, and you decide you want to collect them, so you want to install the vSphere Syslog Collector tool and the ESXi Dump Collector tool as well. How do you install them on the company’s vCSA instance?

A

The Syslog Collector and ESXi Dump Collector are already included in the vCSA and enabled by default. You should log into the vCSA console, check that the services are running, and adjust the core dump repository so it’s large enough for your environment.

8
Q

List the ways you can configure your hosts for centralized logging.

A

You can send core dumps to the ESXi Dump Collector by running esxcli system coredump at each host’s command line, or use the Host Profiles feature in vCenter to propagate the same setting across multiple hosts. For syslog, use the Web Client to configure each host via its advanced settings under Syslog.global, or set each host via the CLI with esxcli system syslog; here too, Host Profiles can propagate the same setting across multiple hosts. A command sketch follows.
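
A minimal CLI sketch of both settings, assuming a collector at the hypothetical address 192.168.1.50, a VMkernel interface named vmk0, and the vSphere 5.5 esxcli namespaces:

# Point the host's syslog at the remote Syslog Collector
esxcli system syslog config set --loghost='udp://192.168.1.50:514'
esxcli system syslog reload

# Send core dumps to the ESXi Dump Collector over vmk0
esxcli system coredump network set --interface-name vmk0 --server-ipv4 192.168.1.50 --server-port 6500
esxcli system coredump network set --enable true
esxcli system coredump network check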

9
Q

What factors contribute to the design of a virtual network and the components involved?

A

Many factors contribute to a virtual network design: the number of physical network adapters in each ESXi host, the use of vSphere Standard Switches versus vSphere Distributed Switches, the presence or use of VLANs in the environment, the existing network topology, requirements for the support of LACP or port mirroring, and the connectivity needs of the VMs in the environment all play a role in the final network design. These are some common questions to ask while designing the network:
Do you have or need a dedicated network for management traffic, such
as for the management of physical switches?

Do you have or need a dedicated network for vMotion traffic?

Are you using 1 Gb Ethernet or 10 Gb Ethernet?

Do you have an IP storage network? Is this IP storage network a dedicated network? Are you running iSCSI or NAS/NFS?

Do you need extremely high levels of fault tolerance for VMs?

Is the existing physical network composed of VLANs?

Do you want to extend the use of VLANs into the virtual switches?

10
Q

You’ve asked a fellow vSphere administrator to create a vSphere Distributed Switch for you, but the administrator can’t complete the task because he can’t find out how to do this with an ESXi host selected in the vSphere Web Client. What should you tell this administrator?

A

vSphere Distributed Switches aren’t created on a per–ESXi host basis but instead span multiple ESXi hosts at the same time. This is what enables the centralized configuration and management of distributed port groups. Tell the administrator to navigate to the Distributed Switches area of the vSphere Web Client to create a new vSphere Distributed Switch.

11
Q

As a joint project between the networking and server teams, you are going to implement LACP in your VMware vSphere 5.5 environment. What are some limitations you need to know about?

A

While vSphere 5.5 and vSphere 6.0 boast enhanced LACP support over previous versions of vSphere, there are still limitations. You can’t have multiple active link aggregation groups (LAGs) for a particular distributed port group. You also can’t have both LAGs and stand-alone uplinks active for a given distributed port group. However, different distributed port groups could use different LAGs, if desired. The enhanced LACP support also requires the use of a version 5.5.0 vSphere Distributed Switch.

12
Q

You’d like to use NIC teaming to bond multiple physical uplinks together for greater redundancy and improved throughput. When selecting the NIC teaming policy, you select Route Based On IP Hash, but then the vSwitch seems to lose connectivity. What could be wrong?

A

The Route Based On IP Hash load-balancing policy requires that the physical switch also be configured to support this arrangement. This is accomplished through link aggregation, referred to as EtherChannel in the
Cisco environment. Without an appropriate link aggregation configuration on the physical switch, using the IP hash load-balancing policy will result in a loss of connectivity. One of the other load-balancing policies, such as the default policy Route Based On Originating Virtual Port ID, may be more appropriate if the configuration of the physical switch cannot be modified.
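
On a standard vSwitch, the teaming policy can also be set from the CLI; a minimal sketch, assuming a vSwitch named vSwitch0 and the vSphere 5.x esxcli namespace:

# Set Route Based On IP Hash (requires a matching static
# EtherChannel/link aggregation on the physical switch)
esxcli network vswitch standard policy failover set --vswitch-name vSwitch0 --load-balancing iphash

# Revert to the default, Route Based On Originating Virtual Port ID
esxcli network vswitch standard policy failover set --vswitch-name vSwitch0 --load-balancing portid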

13
Q

How do you configure both a vSphere Standard Switch and a vSphere Distributed Switch to pass VLAN tags all the way up to a guest OS?

A

On a vSphere Standard Switch, you configure Virtual Guest Tagging (VGT, the name of this particular configuration) by setting the VLAN ID for the VM’s port group to 4095.

On a vSphere Distributed Switch, you enable VGT by setting the VLAN configuration for a distributed port group to VLAN Trunking and then specifying which VLAN IDs should be passed up to the guest OS.
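
On a standard vSwitch, VGT can also be configured from the CLI; a minimal sketch, assuming a port group with the hypothetical name VGT Network:

# Set the port group's VLAN ID to 4095 to enable Virtual Guest Tagging
esxcli network vswitch standard portgroup set --portgroup-name "VGT Network" --vlan-id 4095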

14
Q

What three third-party virtual switches are, as of this writing, available for vSphere environments?

A

As of this writing, the three third-party virtual switches available for use with vSphere are the Cisco Nexus 1000V, the IBM Distributed Virtual Switch 5000V, and the HP FlexFabric 5900v.

15
Q

You have a networking application that needs to see traffic on the virtual network that is intended for other production systems on the same VLAN. The networking application accomplishes this by using Promiscuous mode. How can you accommodate the needs of this networking application without sacrificing the security of the entire virtual switch?

A

Because port groups (or distributed port groups) can override the security policy settings for a virtual switch, and because there can be multiple port groups/distributed port groups that correspond to a VLAN, the best solution involves creating another port group that has all the same settings as the other production port group, including the same VLAN ID.
This new port group should allow Promiscuous mode. Assign the VM with the networking application to this new port group, but leave the remainder of the VMs on a port group that rejects Promiscuous mode. This allows the networking application to see the traffic it needs to see without overly compromising the security of the entire virtual switch.

16
Q

Another vSphere administrator on your team is trying to configure the security policies on a distributed switch but is having some difficulty. What could be the problem?

A

On a vSphere Distributed Switch, all security policies are set at the distributed port group level, not at the distributed switch level. Tell the administrator to modify the properties of the distributed port group(s), not the distributed switch itself. She can also use the Manage Distributed Port Groups command on the Actions menu in the vSphere Web Client to perform the same task on multiple distributed port groups at the same time.

17
Q

Identify examples where each of the protocol choices would be ideal for different vSphere deployments.

A

iSCSI would be a good choice for a customer with no existing Fibre Channel SAN who is getting started with vSphere. Fibre Channel would be a good choice for a customer with an existing Fibre Channel infrastructure, or for VMs with high-bandwidth requirements (200 MBps or more, individually rather than in aggregate). NFS would be a good choice where there are many VMs with a low bandwidth requirement individually (and in aggregate) that is less than a single link’s worth of bandwidth.

18
Q

Identify the three storage performance parameters and the primary determinant of storage performance and how to quickly estimate it for a given storage configuration.

A

The three factors to consider are bandwidth (MBps), throughput (IOPS), and latency (ms). The maximum bandwidth for a single datastore (or RDM) on Fibre Channel is the HBA speed times the number of HBAs in the system (check the fan-in ratio and the number of Fibre Channel ports on the array). The maximum bandwidth for a single datastore (or RDM) on iSCSI is the NIC speed times the number of NICs in the system, up to about 9 Gbps (check the fan-in ratio and the number of Ethernet ports on the array). The maximum bandwidth for a single NFS datastore is the NIC link speed (across multiple datastores, the bandwidth can be balanced across multiple NICs). In all cases, the throughput (IOPS) is primarily a function of the number of spindles (assuming no cache benefit and no RAID loss). A quick rule of thumb: total IOPS = IOPS per spindle × the number of spindles of that type. Latency is measured in milliseconds, though it can reach tens of milliseconds in cases where the storage array is overtaxed.
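
As a quick worked example of the rule of thumb (the per-spindle figure is a rough planning assumption): a LUN backed by 40 15K RPM spindles at roughly 180 IOPS each delivers about 40 × 180 = 7,200 IOPS, regardless of how large those spindles are.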

19
Q

Characterize use cases for VMFS datastores, NFS datastores, and RDMs.

A

VMFS datastores and NFS datastores are shared-container models; they store virtual disks together. VMFS is governed by the block storage stack, and NFS is governed by the network stack. NFS is generally (without the use of 10 GbE LANs) best suited to large numbers of low-bandwidth (any throughput) VMs. VMFS is suited for a wide range of workloads. RDMs should be used sparingly, for cases where the guest must have direct access to a single LUN.

20
Q

If you’re using VMFS and there’s one performance metric to track, what would it be? Configure a monitor for that metric.

A

The metric to measure is queue depth; use resxtop to monitor it (see the sketch below). The datastore-availability or used-capacity managed datastore alerts are good nonperformance metrics to use.
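
A minimal monitoring sketch, assuming resxtop is run from the vMA or vCLI against a hypothetical host name:

# Connect resxtop to the host (prompts for credentials)
resxtop --server esxi01.example.com

# Inside resxtop, press 'u' for the disk-device view and watch the QUED column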

21
Q

What would best identify an oversubscribed VMFS datastore from a performance standpoint? How would you identify the issue? What is it most likely to be? What would be two possible corrective actions you could take?

A

An oversubscribed VMFS datastore is best identified by evaluating the queue depth, and it manifests as slow VMs. The best way to track this is with resxtop, using the QUED (queue depth) column. If the queue is full, take any or all of these courses of action: make the queue deeper and increase the Disk.SchedNumReqOutstanding advanced parameter to match (see the sketch below); vacate VMs using Storage vMotion; or add more spindles to the LUN so that it can fulfill requests more rapidly, or move to a faster spindle type.
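
A minimal sketch of deepening the queue on vSphere 5.5, where the scheduler limit became a per-device setting (the NAA identifier is hypothetical, and the HBA driver’s own queue depth must be raised to match):

# Inspect the device's current settings
esxcli storage core device list -d naa.60a9800012345678

# Raise the scheduler's outstanding-request limit for that device
esxcli storage core device set -d naa.60a9800012345678 -O 64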

22
Q

A VMFS volume is filling up. What are three possible nondisruptive corrective actions you could take?

A

The actions you could take are as follows (a command sketch for growing the volume appears below):

Use Storage vMotion to migrate some VMs to another datastore.

Grow the backing LUN, and grow the VMFS volume.

Add another backing LUN, and add another VMFS extent.
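
A minimal sketch of the grow-the-LUN option, assuming the array administrator has already expanded the backing LUN (the device and partition identifiers are hypothetical, and the partition itself may need to be resized first; see VMware KB 2002461 for the full procedure):

# Rescan so the host sees the larger LUN
esxcli storage core adapter rescan --all

# Grow the VMFS volume into the expanded partition
vmkfstools --growfs "/vmfs/devices/disks/naa.60a9800012345678:1" "/vmfs/devices/disks/naa.60a9800012345678:1"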

23
Q

What would best identify an oversubscribed NFS volume from a performance standpoint? How would you identify the issue? What is it most likely to be? What are two possible corrective actions you could take?

A

The workload in the datastore is reaching the maximum bandwidth of a single link. The easiest way to identify the issue is with the vCenter performance charts, examining the VMkernel NIC’s utilization. If it is at 100 percent, the options are these: upgrade to 10 GbE; or add another NFS datastore and another VMkernel NIC, follow the load-balancing and high-availability decision tree to determine whether NIC teaming or IP routing would work best, and then use Storage vMotion to migrate some VMs to the new datastore (remember that NIC teaming/IP routing balances across multiple datastores, not within a single datastore). Also remember that Storage vMotion adds work to an already busy datastore, so consider scheduling it during a low I/O period, even though it can be done live.

24
Q

Without turning the machine off, convert the virtual disks on a VMFS volume from thin to thick (eager zeroed thick) and back to thin.

A

Use Storage vMotion and select the target disk format during the Storage vMotion process.

25
Q

Identify where you would use a physical compatibility mode RDM, and configure that use case.

A

One use case would be a Microsoft cluster (either W2K3 with MSCS or W2K8/2012 with WFC). You should download the VMware Microsoft clustering guide and follow that use case. Other valid answers are a case where virtual-to-physical mobility of the LUNs is required or one where a Solutions Enabler VM is needed.

26
Q

Quickly estimate the minimum usable capacity needed for 200 VMs with an average VM size of 40 GB. Make some assumptions about vSphere snapshots. What would be the raw capacity needed in the array if you used RAID 10? RAID 5 (4+1)? RAID 6 (10+2)? What would you do to nondisruptively cope if you ran out of capacity?

A

Using rule-of-thumb math, 200 × 40 GB = 8 TB, plus 25 percent extra space (snapshots, other VMware files) = 10 TB. Using RAID 10, you would need at least 20 TB raw. Using RAID 5 (4+1), you would need 12.5 TB. Using RAID 6 (10+2), you would need 12 TB. If you ran out of capacity, you could add capacity to your array and then add datastores and use Storage vMotion. If your array supports dynamic growth of LUNs, you could grow the VMFS or NFS datastores, and if it doesn’t, you could add more VMFS extents.

27
Q

Estimate the number of spindles needed for 100 VMs that drive 200 IOPS each and are 40 GB in size. Assume no RAID loss or cache gain. How many if you use 500 GB SATA 7200 RPM? 300 GB 10K Fibre Channel/SAS? 300 GB 15K Fibre Channel/SAS? 160 GB consumer-grade SSD? 200 GB enterprise flash?

A

This exercise highlights the foolishness of looking only at capacity in the server use case. 100 × 40 GB = 4 TB usable, and 100 × 200 IOPS = 20,000 IOPS. With 500 GB 7200 RPM drives (roughly 80 IOPS each), that’s 250 drives, which have 125 TB raw (non-optimal). With 300 GB 10K RPM drives (roughly 120 IOPS each), that’s 167 drives, which have 50 TB raw (non-optimal). With 15K RPM drives (roughly 180 IOPS each), that’s 111 drives; at 146 GB each, that’s 16 TB raw (getting closer). With consumer-grade SSDs (roughly 1,000 IOPS each), that’s 20 spindles and 3.2 TB raw (too little). With EFD (roughly 5,000 IOPS each), that’s 4 spindles and 800 GB raw (too little). The moral of the story is that the 15K RPM 146 GB drive is the sweet spot for this workload. Note that the extra space can’t be used unless you can find a workload that doesn’t need any performance at all; the spindles are working as hard as they can. Also note that the 4 TB requirement was usable capacity, while these calculations are raw capacity; therefore, in this case, RAID 5, RAID 6, and RAID 10 would all leave extra usable capacity in the end. It’s unusual to have all VMs with a common workload, and 200 IOPS (as an average) is relatively high. This exercise also shows why it’s efficient to have several tiers and several datastores for different classes of VMs (put some on SATA, some on Fibre Channel, some on EFD or SSD), because you can be more efficient.