I love the new server, with its enterprise-level features and its ability to use any old SATA drive.
As a special treat I got an SSD as one of the datastores, as well as an existing regular HDD (aka spinning rust).
However, when I started using VMs in anger I was disappointed by how slowly they performed... in fact things felt slower than on the desktop!
I had installed ESXi on a USB stick and used the latest build direct from the HP website, so everything must be fine on that end... so I must be imagining it, right... right?
The first thing I did was move my VM from the HDD to the SSD (which took ages for only 20 GB) and observed that it did not feel significantly quicker.
So I whipped out IOmeter and started to do some benchmarking.
I was seeing only 1.1 MB/s read/write on the HDD. Having nothing to compare this to, I shrugged and ran a test on the SSD, expecting a significant improvement... but no, all I got was 1.5 MB/s. Even worse, I ran IOmeter against my SAN (an HP Gen8 running unRAID with 4 HDDs) and saw 89 MB/s!
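If you don't have IOmeter handy, a crude sequential-write check with dd from the ESXi shell gives a ballpark figure. This is just a sketch; the datastore path is an assumption, so point it at one of your own volumes:

```shell
# Quick-and-dirty sequential write test; no substitute for IOmeter,
# but enough to spot a controller crawling along at ~1 MB/s.
DATASTORE=/vmfs/volumes/SSD1   # assumption: change to your datastore path
time dd if=/dev/zero of="$DATASTORE/ddtest.bin" bs=1M count=256
rm -f "$DATASTORE/ddtest.bin"  # clean up the test file
```

Divide 256 MB by the elapsed time for a rough MB/s figure.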
After some ninja googling, I came across this article.
It turns out that the driver for the built-in HP storage controller is faulty in the current version, and you need to load an older version to get your performance back.
So, here's how to fix it:
Copy the v88 driver from here: http://vibsdepot.hp.com/hpq/nov2014/esxi-550-drv-vibs/hpvsa/
(Don't worry if you are running ESXi 6; this will still work despite saying ESXi 5.5 in the file name.)
- Stop all VMs
- Enable SSH if it is not already turned on
- Copy "scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib" to /tmp (using WinSCP)
- Connect over SSH (using PuTTY)
- Change directory to /tmp
cd /tmp
- Copy the vib file to /var/log/vmware
cp scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib /var/log/vmware/
- Enter maintenance mode
esxcli system maintenanceMode set --enable true
- Uninstall the running scsi-hpvsa driver
esxcli software vib remove -n scsi-hpvsa -f
This may take a few minutes to complete...
- Install scsi-hpvsa-5.5.0-88 (note the absolute path; esxcli needs it)
esxcli software vib install -v file:/var/log/vmware/scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib --force --no-sig-check --maintenance-mode
- Restart ESXi
- Disable maintenance mode
- Start VMs
And what was the result?
HDD now 6.15 MB/s (a 459% increase)
SSD now 55.85 MB/s (a 3623% increase!!!!)
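For the curious, those percentages come straight from the before/after IOmeter figures; a one-liner to reproduce the arithmetic:

```shell
# (new - old) / old, as a percentage, using the MB/s figures above
awk 'BEGIN {
  printf "HDD: +%.0f%%\n", (6.15 / 1.1 - 1) * 100;
  printf "SSD: +%.0f%%\n", (55.85 / 1.5 - 1) * 100;
}'
# prints: HDD: +459%
#         SSD: +3623%
```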
Wow!
UPDATE for VMware 6.5 Update 1
Upon rebooting with the new (old) driver, my VMware instance did not mount the existing HDDs automatically.
To get round this I ran
esxcfg-volume -l
Scanning for VMFS-3/VMFS-5 host activity (512 bytes/HB, 2048 HBs).
VMFS UUID/label: 57ea5aca-e9e426b3-fcce-6805ca2ee445/HDD1
Can mount: Yes
Can resignature: Yes
Extent name: t10.ATA_____ST1000DM0032D1ER162__________________________________Z4Y3LBLN:1 range: 0 - 953599 (MB)
Scanning for VMFS-3/VMFS-5 host activity (512 bytes/HB, 2048 HBs).
VMFS UUID/label: 57ea59d4-98d844d8-e3c8-6805ca2ee445/SSD1
Can mount: Yes
Can resignature: Yes
Extent name: t10.ATA_____Crucial_CT256MX100SSD1__________________________14510E1BBC87:1 range: 0 - 243967 (MB)
esxcfg-volume -m HDD1
esxcfg-volume -m SSD1
and that fixed the issue :)
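If you have more than a couple of volumes, you can mount everything esxcfg-volume lists instead of naming each one by hand. This is only a sketch: it assumes the "VMFS UUID/label:" line format shown above is stable, and it uses -M (persistent mount) rather than -m so the volumes survive the next reboot.

```shell
# Sketch: extract each label from the esxcfg-volume -l output and mount it.
# Assumes labels follow the trailing slash on the "VMFS UUID/label:" lines.
esxcfg-volume -l \
  | sed -n 's/^VMFS UUID\/label: .*\///p' \
  | while read -r label; do
      esxcfg-volume -M "$label"   # -M = persistent mount; use -m for one-off
    done
```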