We use Linstor on ZFS and then provision VMs with Linstor as storage. It does not matter what FS your guest is using.
Very interesting. How many IOPS did you get in the guest on node1 and node3?

In our testing with ZFS 2.3 and Linstor on NVMe devices, we've achieved impressive results, particularly when leveraging NVMe-oF (RDMA) over a 100 Gbit ring topology in a three-node cluster: a 2x mirror on nodes 1 and 2, accessed from diskless node 3. Our primary storage configurations include NVMe and SAS SSDs, with the most significant performance gains coming from Gen4 and Gen5 NVMe drives - a result that was expected but still remarkable: up to 2x gains and a reliably constant 100 Gbit wire speed, and even more locally.
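For readers who want to reproduce a similar layout, here is a minimal LINSTOR sketch - node, pool, and resource names are hypothetical placeholders, and the exact transport layering (DRBD vs. the NVMe-oF layer) depends on your setup:

# create ZFS-backed storage pools on the two diskful nodes
linstor storage-pool create zfs node1 pool_nvme tank/linstor
linstor storage-pool create zfs node2 pool_nvme tank/linstor
# define a resource and place it on both diskful nodes
linstor resource-definition create vm-100
linstor volume-definition create vm-100 100G
linstor resource create node1 node2 vm-100 --storage-pool pool_nvme
# attach node3 without local storage
linstor resource create node3 vm-100 --diskless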
Key Takeaways from Our Testing
- Diskless Node Access: The real game-changer was accessing storage from a diskless node via NVMe-oF (RDMA), maximizing throughput and minimizing latency.
- Network Efficiency: Running on a 100 Gbit ring topology ensured ultra-low-latency data access across the cluster and is very affordable.
- Stability & Reliability: Since the beginning of our tests with ZFS 2.3-rc1, we have encountered zero issues - a testament to its robustness.
- Dataset-Level Control: One of ZFS's strengths is the ability to control storage behavior dynamically at the dataset level during runtime (see the sketch after this list).
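As an illustration of that last point, properties can be flipped at runtime per dataset; the pool/dataset names below are hypothetical, and direct is the new Direct I/O property in OpenZFS 2.3:

zfs set direct=always tank/linstor/vm-100     # force Direct I/O on this dataset
zfs set compression=lz4 tank/linstor/vm-100   # change compression on the fly
zfs get direct,compression tank/linstor/vm-100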
Ready for Production?
Based on our experience, we confidently recommend using ZFS 2.3 in production environments. The combination of high-speed networking, NVMe-oF, and Gen5 NVMe drives delivers exceptional performance while maintaining stability. However, this is our personal experience, not a paid endorsement - just a genuine recommendation from the field.
I want to compare IOPS on 2 local NVMe drives vs. a raidz1 / ZFS 2.3 Direct I/O mirror on these same NVMe drives.

Sorry not to push this further - in my experience, those numbers don't mean anything without context. They depend heavily on other hardware (CPU, networking in this specific case, etc.), and I don't want to post figures here that might get wrongly interpreted. What exactly are you looking for, to compare those numbers with?
I will be happy to start testing in a new topic. My hardware is 4x Micron 7400 1.92 TB, 2x Micron 7400 3.84 TB, and 2x DL380 G10 connected by Mellanox cards with 100G RDMA.

We don't run raid-z in our production environments; we run Linstor with HA and replication. The whole idea was to look at the possible Direct I/O improvements in ZFS 2.3, which are significant (1.3x - 2x depending on hardware) in relevant write scenarios.
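If you run that comparison, a fixed fio job line keeps the numbers comparable; the device path, dataset path, and job parameters below are placeholders to adapt, not our benchmark settings:

# raw device test - destructive, only run against an empty disk
fio --name=raw-nvme --filename=/dev/nvme0n1 --direct=1 --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting
# same job against a file on the ZFS mirror (Direct I/O honored per the dataset's direct property)
fio --name=zfs-dio --directory=/tank/fio --size=16G --direct=1 --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting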
ZFS 2.3 does add many more improvements and bugfixes, including live-adding disks to raid-z arrays (RAIDZ expansion), but that was not my focus here.
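For completeness, RAIDZ expansion in 2.3 works by attaching a new disk to an existing raid-z vdev; pool, vdev, and device names here are made up:

zpool attach tank raidz1-0 /dev/nvme4n1
zpool status tank   # reports the expansion progress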
We use CD8P-V series Gen5 NVMe drives on a Zen 4 platform with DDR5-4800; we now get a consistent 10 GB/s throughput from diskless nodes and close to 12 GB/s on the local system.
Since we upgraded all our servers to ZFS 2.3, we have built easy-to-use ZFS modules (with kernel 6.11.11 and DKMS support), as all our servers in production boot from ZFS as well.
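If you would rather build the modules yourself, the rough shape of the upstream DKMS package build on Debian is below; treat the branch name and dependency list as assumptions to check against the OpenZFS build documentation:

apt install build-essential autoconf automake libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev python3 python3-dev linux-headers-$(uname -r)
git clone -b zfs-2.3-release https://github.com/openzfs/zfs.git
cd zfs
sh autogen.sh && ./configure
make -j"$(nproc)" deb-utils deb-dkms
apt install ./*.deb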
For anybody looking to get this running: again, feel free to use the modules. And if anybody is looking for help, we are happy to support this as well as we can, in an open-source-ish manner!
If you want to deep-dive more into benchmarks etc., I think it's better to spin off a new topic and/or PM me, and we'll run some tests together ;-)
dkms is already the newest version (3.0.10-8+deb12u1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
gpg: no valid OpenPGP data found.
Hit:1 http://security.debian.org bookworm-security InRelease
Hit:2 http://download.proxmox.com/debian/pve bookworm InRelease
Hit:3 http://ftp.us.debian.org/debian bookworm InRelease
Hit:4 http://ftp.us.debian.org/debian bookworm-updates InRelease
Ign:5 https://download.webmin.com/download/newkey/repository stable InRelease
Hit:6 https://download.webmin.com/download/newkey/repository stable Release
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package libnvpair3 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
libnvpair3linux
Package libuutil3 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
libuutil3linux
Package zfs is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'zfs' has no installation candidate
E: Unable to locate package libzfs6
E: Package 'libnvpair3' has no installation candidate
E: Package 'libuutil3' has no installation candidate
E: Unable to locate package libzfs6-devel
E: Unable to locate package libzpool6
E: Unable to locate package pam-zfs-key
update-initramfs: Generating /boot/initrd.img-6.8.12-8-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/6D82-EDB6
Copying kernel and creating boot-entry for 6.8.12-4-pve
Copying kernel and creating boot-entry for 6.8.12-8-pve
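Judging from the apt output above, the install is asking for the upstream package names, while on Debian/Proxmox the OpenZFS libraries carry a "linux" suffix - exactly the replacements apt suggests. Assuming the distribution-shipped packages are wanted, something like this should resolve it:

# the Proxmox kernel already ships the ZFS module, so zfs-dkms is usually not needed
apt install zfsutils-linux libnvpair3linux libuutil3linux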