Poor disk speed using KVM on ZFS

melanch0lia

New Member
Jul 31, 2014
Using ZFS 0.6.2 on RAID10 with Proxmox VE for my KVM machines, running with cache=writeback (cache=none won't start).

Deduplication disabled, primarycache/secondarycache=all, checksum and compression on (compressratio 1.71x), sync=standard.
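For reference, all of these properties can be checked in one go; the dataset name below is just a placeholder for wherever /var/lib/vz lives:

++++++++++++++
# dataset name is a placeholder - adjust to your own layout
zfs get compression,compressratio,dedup,primarycache,secondarycache,checksum,sync rpool/var-lib-vz
++++++++++++++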

Inside the KVM guest (qcow2):
++++++++++++++
$ dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 39.2076 s, 13.7 MB/s
++++++++++++++


Outside KVM - directly on the ZFS filesystem:
++++++++++++++
# dd bs=1M count=512 if=/dev/zero of=/var/lib/vz/test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 0.851028 s, 631 MB/s
++++++++++++++

Any tips?
No improvement with:
- changing the disk interface from IDE to virtio
- decreasing/increasing zfs_arc_max
(both changes sketched below for reference)

VMs simply hang under even light disk load.
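The two changes above look roughly like this; the VM ID 100, the storage name 'local' and the 4 GiB ARC value are only example placeholders:

++++++++++++++
# /etc/modprobe.d/zfs.conf - cap the ARC (example: 4 GiB)
options zfs zfs_arc_max=4294967296
# then rebuild the initramfs and reboot so the new limit is picked up
update-initramfs -u

# /etc/pve/qemu-server/100.conf - attach the qcow2 disk as virtio with writeback cache
virtio0: local:100/vm-100-disk-1.qcow2,cache=writeback
++++++++++++++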

++++++++++++++

I think the problem is in the interaction between KVM and ZFS.

Using an OpenVZ container:
++++++++++++++
$ dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 0.541697 s, 702 MB/s
++++++++++++++
 
Offtopic: is this a direct/native ZFS install on the PVE host with a raid controller, or just a pool of striped mirrors that operates like 'raid10'?
 
Strange result you have!

Qcow2 over ZFS' NFS server:
dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 7.86987 s, 68.2 MB/s

RAW over ZFS via iSCSI:
dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 5.68464 s, 94.4 MB/s



On the storage server:
dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 1.39116 s, 386 MB/s
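For context, storages like these are defined in /etc/pve/storage.cfg roughly as follows; the storage names, server address, export path and target IQN below are placeholders, not the actual values:

nfs: zfs-nfs
        server 192.168.1.10
        export /tank/vmstore
        path /mnt/pve/zfs-nfs
        content images

iscsi: zfs-iscsi
        portal 192.168.1.10
        target iqn.2014-07.org.example:tank
        content images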
 
Offtopic: is this a direct/native ZFS install on the PVE host with a raid controller, or just a pool of striped mirrors that operates like 'raid10'?
It's a direct ZFS install on the PVE host with a raid controller, using ZFSOnLinux.

Pretty stable configuration and I'm loving it.

Strange result you have!

Qcow2 over ZFS' NFS server:
dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 7.86987 s, 68.2 MB/s

RAW over ZFS via iSCSI:
dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 5.68464 s, 94.4 MB/s

On the storage server:
dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 1.39116 s, 386 MB/s
Nothing strange.

You are using ZFS over a network transport, while I have a direct ZFS install on the PVE host.

So your data first crosses the network, and the actual write to disk happens at that network-limited rate.

Anyway, the problem is fixed.
 
It is strange that you see a performance penalty which is 50 times slower than directly on the pool. Mine shows only 5 times slower, and that even includes the network! Your test should have shown at least 80-90% of the direct speed for qcow2 (~500 MB/s).
 
Never use a raid controller and then put ZFS on top of it. ZFS does its own version of "RAID" and needs direct access to the disks to do so correctly. Flash your card over to IT mode, or if that's not an option, make each disk its own RAID0 array and then build a ZFS pool on top of those.

Hardware/Software RAID + ZFS = no no no

ZFS + direct disk access = yeah yeah yeah.
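A minimal sketch of that layout, assuming four disks and the pool name 'tank' (device names are placeholders; /dev/disk/by-id paths are preferable to /dev/sdX in practice):

# striped mirrors = the ZFS equivalent of raid10, built straight on the disks
zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
                  mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4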
 
Sorry for hijacking this post.
I have a Proxmox host with 4x 1 TB HDDs in a mirrored stripe setup, and all VMs are located on a ZFS dataset.
Now I am thinking of adding another Proxmox host with the same hardware config and ZFS as backend storage.
I would also like to test live migration between these two nodes.
So I am thinking of two possible scenarios:
1. zfs -> zvols -> drbd -> lvm
and
2. zfs -> separate zfs datasets for each vm -> glusterfs

Both scenarios should work. I was wondering if anyone has tested such a setup and which of the two has better I/O performance.
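Roughly what I have in mind for the ZFS layer in each case; pool name, VM ID and size are just placeholders:

# scenario 1: one zvol per VM, exposed as a block device for DRBD + LVM on top
zfs create -V 32G tank/drbd-vm101     # shows up as /dev/zvol/tank/drbd-vm101
# scenario 2: one plain dataset per VM for GlusterFS to export
zfs create tank/vm101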
 
