Proxmox 4.4 performance with ZFS+NVME

dswartz

Renowned Member
Dec 13, 2010
So I got a couple of Samsung 1TB 960 PRO drives. I tried to use them with ESXi and XenServer, but performance in both cases sucked. I had created a simple mirror using them. In both cases, I tried using a virtual storage appliance and exporting the ZFS datastore via iSCSI or NFS. I was lucky to get 1/4 of the raw throughput the drives can put out. Testing was with CrystalDiskMark 64-bit in a Win7 VM. I then installed Proxmox 4.4 and set up the mirror again. Instead of using native ZFS in the GUI (which creates zvols), I created the dataset nvme/proxmox manually and then told the installer to use that directory (a rough sketch of the setup commands follows the results). I created a Win7 VM, changed the drive from IDE to VIRTIO, and ran the test. Here are the numbers using a RAW vdisk:

-----------------------------------------------------------------------
CrystalDiskMark 5.2.1 (C) 2007-2017 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 6969.113 MB/s
Sequential Write (Q= 32,T= 1) : 3215.869 MB/s
Random Read 4KiB (Q= 32,T= 1) : 329.162 MB/s [ 80361.8 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 185.008 MB/s [ 45168.0 IOPS]
Sequential Read (T= 1) : 3065.247 MB/s
Sequential Write (T= 1) : 1768.979 MB/s
Random Read 4KiB (Q= 1,T= 1) : 104.388 MB/s [ 25485.4 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 63.974 MB/s [ 15618.7 IOPS]

Test : 8192 MiB [C: 28.7% (9.2/31.9 GiB)] (x5) [Interval=5 sec]
Date : 2017/03/17 14:26:12
OS : Windows 7 Professional SP1 [6.1 Build 7601] (x86)

(Note that qcow2 was much inferior: its numbers were roughly half those of raw.)
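For anyone wanting to replicate the layout described above, here is a minimal sketch under a few assumptions: the pool name nvme and the dataset nvme/proxmox come from the post, while the NVMe device names, the storage ID nvme-dir, and the VM ID are placeholders.

    # Mirror the two 960 PROs into a pool called 'nvme' (device names are examples)
    zpool create nvme mirror /dev/nvme0n1 /dev/nvme1n1

    # Create the dataset that holds the VM images as plain files
    zfs create nvme/proxmox

    # Register it with Proxmox as directory storage (storage ID 'nvme-dir' is made up)
    pvesm add dir nvme-dir --path /nvme/proxmox --content images

    # When adding a virtio disk, raw can be requested explicitly,
    # e.g. a 32 GB disk for VM 100:
    qm set 100 --virtio0 nvme-dir:32,format=raw

Requesting format=raw on a directory storage simply skips the extra qcow2 layer on top of the file, which is consistent with the roughly 2x raw-vs-qcow2 gap noted above.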
 
Yes, I meant to mention that. That's fine, from my POV. My point was that the other two hypervisor solutions sucked for reads as well as writes, due to hypervisor I/O stack limitations as well as the LAN connection between the hypervisor and the storage appliance. I did try XenServer with local storage to eliminate that, but it was also disappointing. Apparently, all I/O goes through dom0 (the control domain), which limits throughput.
 
I usually get great performance using normal SAS/SATA disks with SLOG and L2ARC on SSD, and using zvols. Qcow2 or raw in a directory performs worse. I get better performance using virtio vs. virtio-scsi in Windows. This might vary somewhat per system, but I suggest trying zvols, too, for VM images.
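For comparison, a zvol-backed setup on the same pool could look roughly like this; the storage ID nvme-zvol is made up and the option values are only illustrative:

    # Register the pool as ZFS storage so VM disks are created as zvols
    pvesm add zfspool nvme-zvol --pool nvme --content images --sparse 1

    # A virtio disk allocated here (e.g. 32 GB for VM 100) becomes a zvol
    # such as nvme/vm-100-disk-1 rather than a file on a dataset
    qm set 100 --virtio0 nvme-zvol:32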
 
