CPU Performance Analysis Flashcards

1
Q

Host: CPU usage is consistently high.

Virtual machine: CPU usage is above 90%. CPU ready is above 20%. Application performance is poor.

Likely Causes?

A

The host has insufficient CPU resources to meet the demand.

Too many virtual CPUs are running on the host.

Storage or network operations are placing the CPU in a wait state.

The guest OS generates too much load for the CPU.
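
Not part of the original card: CPU ready is reported as a summation in milliseconds, and converting it to a percentage of the sample window is what the 20% threshold above refers to. A minimal pyVmomi sketch of that conversion; the vCenter address, credentials, and VM name are placeholders, and cpu.ready.summation is queried at the 20-second real-time interval.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; certificate checking is disabled for a lab setup.
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the (hypothetical) troubled VM by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-vm-01")

pm = content.perfManager
# Find the counter ID for cpu.ready.summation (milliseconds of ready time).
ready_id = next(c.key for c in pm.perfCounter
                if c.groupInfo.key == "cpu" and c.nameInfo.key == "ready"
                and c.rollupType == "summation")

spec = vim.PerformanceManager.QuerySpec(
    entity=vm,
    metricId=[vim.PerformanceManager.MetricId(counterId=ready_id, instance="")],
    intervalId=20,   # 20-second real-time samples
    maxSample=15)    # roughly the last five minutes
series = pm.QueryPerf(querySpec=[spec])[0].value[0]

ncpu = vm.config.hardware.numCPU
for ms in series.value:
    # The aggregate instance sums ready time across vCPUs, so divide by the
    # vCPU count for a per-vCPU percentage; the card's 20% figure signals
    # heavy contention.
    print("CPU ready: %.1f%% per vCPU" % (ms / (20000.0 * ncpu) * 100))

Disconnect(si)
```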

2
Q

Host: CPU usage is consistently high.

Virtual machine: CPU usage is above 90%. CPU ready is above 20%. Application performance is poor.

Potential Solutions?

A

Add the host to a DRS cluster.

Increase the number of hosts in the DRS cluster.

Migrate one or more virtual machines to other hosts.

Upgrade the physical CPUs of the host.

Upgrade ESXi to the latest version.

Enable CPU-saving features such as TCP segmentation offload, large memory pages, and jumbo frames.

Increase the amount of memory allocated to the virtual machines, which may improve cached I/O and reduce CPU utilization.

Reduce the number of virtual CPUs assigned to virtual machines.

Ensure that VMware Tools is installed.

Compare the CPU usage of troubled virtual machines with that of other virtual machines on the host or in the resource pool. (Hint: Use a stacked graph.)

Increase the CPU limit, shares, or reservation on the troubled virtual machine.
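
Not part of the original card: the last remedy maps to a single VirtualMachine.ReconfigVM_Task call. A hedged pyVmomi sketch, with placeholder connection details and illustrative values rather than recommended settings.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-vm-01")

alloc = vim.ResourceAllocationInfo()
alloc.reservation = 1000  # guarantee 1000 MHz to this VM (illustrative value)
# Favor this VM under contention; the shares count is ignored unless level=custom.
alloc.shares = vim.SharesInfo(level=vim.SharesInfo.Level.high, shares=0)

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(cpuAllocation=alloc))
print("Reconfigure task state:", task.info.state)
Disconnect(si)
```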

3
Q

Host: Memory usage is consistently 94% or higher. Free memory is 6% or less.

Virtual machine: Swapping is occurring. (Memory usage may be high or low.)

Likely Causes?

A

The host has insufficient memory resources to meet the demand.

4
Q

Host: Memory usage is consistently 94% or higher. Free memory is 6% or less.

Virtual machine: Swapping is occurring. (Memory usage may be high or low.)

Potential Solutions?

A

Ensure that VMware Tools is installed and that the balloon driver is enabled for all virtual machines.

Reduce the memory size on oversized virtual machines.

Reduce the memory reservation of virtual machines where it is set higher than needed.

Add the host to a DRS cluster.

Increase the number of hosts in the DRS cluster.

Migrate one or more virtual machines to other hosts.

Add physical memory to the host.
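
Not part of the original card: before applying these remedies, the per-VM quickStats show whether ballooning or hypervisor swapping is actually occurring. A minimal pyVmomi sketch; the host name and credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")

for vm in host.vm:
    qs = vm.summary.quickStats
    # Nonzero values mean the host is reclaiming memory: ballooning first,
    # hypervisor swapping once the balloon driver can no longer keep up.
    if qs.balloonedMemory or qs.swappedMemory:
        print("%s: ballooned %d MB, swapped %d MB"
              % (vm.name, qs.balloonedMemory, qs.swappedMemory))
Disconnect(si)
```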

5
Q

Virtual machine: Memory usage is high.

Guest OS: Memory usage is high. Paging is occurring.

Likely Causes?

A

The virtual machine does not provide the guest OS with sufficient memory.

6
Q

Virtual machine: Memory usage is high.

Guest OS: Memory usage is high. Paging is occurring.

Potential Solutions?

A

Increase the memory size of the virtual machine.

7
Q

Virtual machine: CPU ready is low.

Guest OS: CPU utilization is high.

Likely Causes?

A

The virtual machine does not provide the guest OS with sufficient CPU resources.

8
Q

Virtual machine: CPU ready is low.

Guest OS: CPU utilization is high.

Potential Solutions?

A

Increase the number of CPUs for the virtual machine.

Migrate the virtual machine to a host with faster CPUs.
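
Not part of the original card: the first remedy here, like the memory resize on card 6, comes down to one VirtualMachine.ReconfigVM_Task call. A sketch with placeholder names and illustrative sizes; the VM must be powered off unless CPU/memory hot add is enabled.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-vm-01")

# Raise the vCPU count (this card) and the memory size (card 6) in one
# reconfigure; both values are illustrative.
spec = vim.vm.ConfigSpec(numCPUs=4, memoryMB=8192)
task = vm.ReconfigVM_Task(spec=spec)
print("Reconfigure task state:", task.info.state)
Disconnect(si)
```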

9
Q

Datastore: Space utilization is high.

Likely Causes?

A

Snapshot files are consuming a lot of datastore space.

Some virtual machines are provisioned with more storage space than required.

The datastore has insufficient storage space to meet the demand.

10
Q

Datastore: Space utilization is high.

Potential Solutions?

A

Delete or consolidate virtual machine snapshots.

Convert some virtual disks to thin provisioning.

Migrate one or more virtual machines (or virtual disks) to other datastores.

Add the datastore to a Storage DRS datastore cluster. Add datastores with available space to the datastore cluster.

Add more storage space to the datastore.
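
Not part of the original card: a quick way to find the datastores this pair of cards applies to is to compare capacity against free space. A pyVmomi sketch; the 90% cut-off is an illustrative choice, not a value from the card.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

for ds in view.view:
    cap, free = ds.summary.capacity, ds.summary.freeSpace  # bytes
    if not cap:
        continue  # skip inaccessible datastores that report zero capacity
    used_pct = 100.0 * (cap - free) / cap
    if used_pct > 90:
        print("%s: %.1f%% used (%.1f GB free)" % (ds.name, used_pct, free / 2**30))
Disconnect(si)
```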

11
Q

Disk: Device latency is greater than 15 ms.

Likely Causes?

A

Problems are occurring with the storage array.
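
Not part of the original card: device latency is the DAVG value in esxtop, and vCenter exposes it, along with the kernel (KAVG) and queue (QAVG) latencies that card 13 uses, as disk.deviceLatency/kernelLatency/queueLatency average counters. A hedged pyVmomi sketch that lists them per device for one host; host name and credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")

pm = content.perfManager
wanted = ("deviceLatency", "kernelLatency", "queueLatency")  # DAVG, KAVG, QAVG (ms)
ids = {c.key: c.nameInfo.key for c in pm.perfCounter
       if c.groupInfo.key == "disk" and c.nameInfo.key in wanted
       and c.rollupType == "average"}

spec = vim.PerformanceManager.QuerySpec(
    entity=host,
    metricId=[vim.PerformanceManager.MetricId(counterId=k, instance="*") for k in ids],
    intervalId=20, maxSample=1)

# Compare against the cards' thresholds: device > 15 ms, kernel > 4 ms, queue > 0.
for series in pm.QueryPerf(querySpec=[spec])[0].value:
    print("%s %s: %s ms" % (series.id.instance, ids[series.id.counterId], series.value))
Disconnect(si)
```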

12
Q

Disk: Device latency is greater than 15 ms.

Potential Solutions?

A

Migrate the virtual machines to datastores backed by other storage arrays.

13
Q

Disk: VMkernel latency is greater than 4 ms. Queue latency is greater than zero.

Likely Causes?

A

The maximum throughput of a storage device is not sufficient to meet the demand of the current workload.

14
Q

Disk: VMkernel latency is greater than 4 ms. Queue latency is greater than zero.

Potential Solutions?

A

Migrate the virtual machines to datastores backed by storage devices (LUNs) with more spindles.

Balance virtual machines and their disk I/O across the available physical resources. Use Storage DRS I/O balancing.

Add more disks (spindles) to the storage device backing the datastore.

Configure the queue depth and cache settings on the RAID controllers. Adjust the Disk.SchedNumReqOutstanding parameter.

Configure multipathing.

Increase the memory size of the virtual machine to eliminate any guest OS paging. Increase the guest OS caching of disk I/O.

Ensure that no virtual machine swapping or ballooning is occurring.

Defragment guest file systems.

Use eager-zeroed thick-provisioned virtual disks.

15
Q

Network: The number of packets dropped is greater than zero. Latency is high. The transfer rate is low.

Likely Causes?

A

The maximum throughput of a physical network adapter is not sufficient to meet the demand of the current workload.

Network resource shares assigned to the virtual machine are set too low.

Network packet size is too large, which results in high network latency. Use the VMware AppSpeed performance monitoring application or a third-party application to check network latency.

Network packet size is too small, which increases the demand for the CPU resources needed for processing each packet. Host CPU, or possibly virtual machine CPU, resources are not enough to handle the load.
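
Not part of the original card: the dropped-packet symptom corresponds to the net.droppedRx and net.droppedTx summation counters. A minimal pyVmomi sketch for one VM, with placeholder names; any nonzero total over the window matches the symptom.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-vm-01")

pm = content.perfManager
ids = {c.key: c.nameInfo.key for c in pm.perfCounter
       if c.groupInfo.key == "net" and c.nameInfo.key in ("droppedRx", "droppedTx")}

spec = vim.PerformanceManager.QuerySpec(
    entity=vm,
    metricId=[vim.PerformanceManager.MetricId(counterId=k, instance="") for k in ids],
    intervalId=20, maxSample=15)  # 20-second samples, about five minutes

for series in pm.QueryPerf(querySpec=[spec])[0].value:
    # Summation counters: total the per-sample drop counts over the window.
    print("%s: %d packets dropped in the last 5 minutes"
          % (ids[series.id.counterId], sum(series.value)))
Disconnect(si)
```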

16
Q

Network: The number of packets dropped is greater than zero. Latency is high. The transfer rate is low.

Potential Solutions?

A

Install VMware Tools on each virtual machine and configure the guest OS to use the best-performing network adapter driver (such as vmxnet3).

Migrate virtual machines to other hosts or to other physical network adapters.

Verify that all NICs are running in full duplex mode.

Implement TCP Segmentation Offload (TSO) and jumbo frames.

Assign additional physical adapters as uplinks for the associated port groups.

Replace physical network adapters with high-bandwidth adapters.

Place sets of virtual machines that communicate with each other regularly on the same ESXi host.
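
Not part of the original card: the duplex check in the list above can be automated from each physical NIC's link state. A pyVmomi sketch with placeholder names.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")

for pnic in host.config.network.pnic:
    link = pnic.linkSpeed  # None when the link is down
    if link is None:
        print("%s: link down" % pnic.device)
    elif not link.duplex:
        print("%s: %d Mb/s HALF duplex - fix the switch port or NIC setting"
              % (pnic.device, link.speedMb))
    else:
        print("%s: %d Mb/s full duplex" % (pnic.device, link.speedMb))
Disconnect(si)
```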

17
Q

Performance charts are empty.

Likely Causes?

A

Some metrics are not available for pre-ESXi 5.0 hosts.

Data is deleted when you move objects to a new vCenter Server instance or remove them.

Performance chart data for inventory objects that were moved to a new site by VMware vCenter Site Recovery Manager is deleted from the old site and not copied to the new site.

Performance chart data is deleted when you use VMware vMotion across vCenter Server instances.

Real-time statistics are not available for disconnected hosts or powered-off virtual machines.

Non-real-time statistics are rolled up at specific intervals. For example, 1-day statistics might not be available for 30 minutes after the current time, depending on when the sample period began.

The 1-day statistics are rolled up to create one data point every 30 minutes. If a delay occurs in the roll-up operation, the 1-week statistics might not be available for 1 hour after the current time. It takes 30 minutes for the 1-week collection interval, plus 30 minutes for the 1-day collection interval.

The 1-week statistics are rolled up to create one data point every two hours. If a delay occurs in the roll-up operations, the 1-month statistics might not be available for 3 hours. It takes 2 hours for the 1-month collection interval, plus 1 hour for the 1-week collection interval.

The 1-month statistics are rolled up to create one data point every day. If a delay occurs in the roll-up operations, the statistics might not be available for 1 day and 3 hours. It takes 1 day for the past year collection interval, plus 3 hours for the past month collection interval. During this time, the charts are empty.
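
Not part of the original card: the delays in the last four items follow one pattern. Each chart interval can trail real time by its own rollup period plus the lag of the interval it is rolled up from. A small arithmetic sketch in Python, with the periods taken from the card.

```python
# (interval name, rollup period in minutes) down the rollup chain.
ROLLUP_PERIOD_MIN = [
    ("1-day",   30),    # real-time samples -> one 1-day point every 30 min
    ("1-week",  30),    # 1-day stats      -> one 1-week point every 30 min
    ("1-month", 120),   # 1-week stats     -> one 1-month point every 2 h
    ("1-year",  1440),  # 1-month stats    -> one 1-year point every 1 day
]

delay = 0
for interval, period in ROLLUP_PERIOD_MIN:
    delay += period  # lag accumulates down the rollup chain
    print("%s charts may trail real time by up to %dh%02dm"
          % (interval, delay // 60, delay % 60))
# Prints 0h30m, 1h00m, 3h00m, 27h00m - the 30 min / 1 h / 3 h / 1 d 3 h above.
```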

18
Q

Performance charts are empty.

Potential Solutions?

A

Upgrade hosts to a later version of ESXi.

Allow time for data collection on objects that were recently added to, migrated to, or recovered by the vCenter Server instance.

Power on all hosts and allow time for real-time statistics to collect.

Allow time for the required roll-ups for non-real-time statistics.