ZFS 0.8.0 Released

I noticed the SIMD changes have been merged on the ZoL side. Is there a ballpark for when these will make it into Proxmox, even the Proxmox beta? Thanks!

We usually pull in upstream minor releases quickly, and that patch set is slated for inclusion in the next one or the one after it.
 
FYI: We backported SIMD for our ZFS kernel module with pve-kernel version 5.0.21-1 (released end of August 2019), so there should be no change regarding that, if you use recent kernels at least :)
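If you want to double-check that the module is actually using SIMD, a quick way (assuming a ZoL 0.8-era module; the exact paths can differ between versions) is:

# shows the available fletcher4 checksum implementations, with the selected one in brackets (e.g. avx2, sse41, scalar)
cat /sys/module/zcommon/parameters/zfs_fletcher_4_impl
# per-implementation benchmark results gathered when the module was loaded
cat /proc/spl/kstat/zfs/fletcher_4_bench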

An updated kernel and updated userland packages for ZFS 0.8.3 are currently being tested; they may reach the public testing repository today or early next week, if all goes well.
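Once that lands, a couple of quick ways to confirm that kernel module and userland match (exact package names can vary between releases):

# Proxmox package overview, filtered to the kernel and ZFS bits
pveversion -v | grep -Ei 'kernel|zfs'
# version of the ZFS module that is actually loaded
cat /sys/module/zfs/version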
 
ZFS is still awfully slow for me. :(

Trying to run some low-load VMs with a few containers inside from an encrypted ZFS pool kills my host for minutes. I have even thrown an Intel Optane at it. There's something seriously flawed with ZFS on Proxmox 6.1-5/9bf061 (Ryzen 2700X / 64 GB ECC / WD Red + Optane 900p as SLOG).

[Screenshot attachment: 1580486372445.png]
 
What does your ZFS config look like? RAIDZ? Mirror? Disk count? Are you using the SSD for read cache or as a ZIL?
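A couple of standard commands that show exactly that (pool layout, and whether the SSD sits in a log or cache vdev):

# pool topology: mirror/raidz layout, log and cache devices, error counters
zpool status
# capacity and usage broken down per vdev
zpool list -v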
 
Two 10 TB drives as a mirror with an Optane as ZIL. But it doesn't matter whether the ZIL is there or not. Really simple homelab setup.
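For illustration, a layout like that is typically created along these lines (pool name and device paths below are placeholders, not taken from this setup):

# two-disk mirror with a separate log device (SLOG)
zpool create tank mirror /dev/disk/by-id/ata-WD100-A /dev/disk/by-id/ata-WD100-B \
    log /dev/disk/by-id/nvme-INTEL_OPTANE_900P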
 
How does the output of "zpool iostat 1 10000000" look? Is it mostly reads or writes? Post a snippet here if possible (a per-vdev variant is sketched below).

A two-drive mechanical setup isn't going to provide much in terms of IOPS.
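Expanding on the zpool iostat suggestion: the per-vdev view often tells more, and recent ZFS versions can also report latencies (the 1-second interval below is just an example):

# per-vdev read/write ops and bandwidth, refreshed every second
zpool iostat -v 1
# additionally show average request latencies
zpool iostat -v -l 1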
 
Yeah, sure. I have other setups with some aged Debian + ZFS 0.7.x that outperform the crippled ZFS in Proxmox by far. I'll send stats in a few minutes, since there's a live migration still ongoing.
 
Is that setup the same? Are you running VMs on zvols just like Proxmox does? Or are you simply using the ZFS filesystem with no zvols involved on the Debian setup? We use ZFS within Proxmox in many, many setups with zero issues, but there are a lot of configuration options to pin down for proper performance.
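A quick way to see whether zvols are in play at all (Proxmox creates one zvol per VM disk on a zfspool storage):

# list only zvols, with their size and block size
zfs list -t volume -o name,volsize,volblocksize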
 
There are no zvols involved on the other machine, just a ZFS filesystem serving some exports. On the other hand it's much weaker in terms of CPU (a 6-year-old HP MicroServer with only an AMD Turion CPU). I know it can't be compared 1:1, but I am pretty sure that what I see is not normal. It can't be. Still waiting for the migration to finish so I can provide you with some solid data.
 
That is your issue then; they really can't be compared performance-wise. I doubt the CPU aspect matters much unless it's saturated. I had a feeling, as we too have seen considerably better performance with a non-zvol setup vs. zvols. The only time we have found success with zvols is with a large number of mirror vdevs to get the IOPS up there.

However, we have found for our workloads that moving to a 128K volblocksize for the zvols helps a ton and gets it slightly closer to non-zvol performance. That would require you to re-write all the data, though.
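A sketch of what that looks like; the zvol name, size and storage ID below are made up, and volblocksize can only be set at creation time:

# plain ZFS: create a zvol with a 128K block size
zfs create -V 32G -o volblocksize=128k tank/vm-100-disk-1

# Proxmox side: a zfspool storage entry in /etc/pve/storage.cfg with a default
# block size for newly created disks ('local-zfs-128k' is a made-up storage ID)
zfspool: local-zfs-128k
        pool tank
        blocksize 128k
        content images,rootdir

Existing VM disks would then have to be recreated or moved onto such a storage to pick up the new block size.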
 
This is what it looked like while migrating from one pool to another. It basically renders the whole machine barely responsive...

six = ZFS mirror without SLOG, unencrypted
fatbox = ZFS mirror with SLOG, encrypted


[Screenshot attachment: 1580492745487.png]
 
Doesn't surprise me. IMO a 128K block size would help here and not make the disks work so hard (as you really only have the IOPS of one disk). However, it's never going to provide that much performance; maybe a couple hundred IOPS.
 
You mean this? Already done...

root@newton:~# zfs get recordsize six
NAME  PROPERTY    VALUE  SOURCE
six   recordsize  128K   default
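One thing to note: recordsize applies to datasets, while the VM disks Proxmox creates are zvols, whose equivalent (and fixed-at-creation) property is volblocksize (the Proxmox default is 8K unless changed). A check with a placeholder zvol name:

# block size of a single VM disk zvol
zfs get volblocksize six/vm-100-disk-0
# or for all zvols at once
zfs get -t volume volblocksize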
 
Bummer. Sounds like you need more vdevs to get the required IOPS to run zvols for your VMs.
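For completeness, growing IOPS that way means adding more mirror vdevs, something like the following (device names are placeholders; existing data is not rebalanced, only new writes spread across the added vdev):

# add a second two-disk mirror vdev to the existing pool
zpool add six mirror /dev/disk/by-id/ata-NEWDISK-1 /dev/disk/by-id/ata-NEWDISK-2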
 
... there is no heavy load on them. Other cluster-aware filesystems migrate stuff smoothly in the background and don't basically kill your whole I/O.
 
