ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

gogito

Member
Jan 12, 2022
Distribution Name | Proxmox VE (Debian Trixie)
Distribution Version | 9.1.5 (Debian 13)
Kernel Version | Linux 6.17.9-1-pve
Architecture | x86_64
OpenZFS Version | zfs-2.4.0-pve1 - zfs-kmod-2.4.0-pve1

My zpool has two devices: sdb (HDD, the data vdev) and sda (NVMe, the special vdev).

The issue is that every dataset reports its remaining free space as "HDD (data) size - (HDD + NVMe usage) = 448GB", while hardware-wise the HDD has 1.6T left and the NVMe has 800G left.

My expectation would be for ZFS to treat the HDD's remaining 1.6T as the actual free space, i.e. report how much can actually still be written, not that subtraction, since the subtraction gives wrong information to other programs.

Is this a bug?
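For reference, this is roughly how I'm comparing the two views (pool name zfs_main, devices as above):

zpool list -v zfs_main                        # per-vdev view: size/free for the HDD data vdev and the NVMe special vdev separately
zfs list -r -o name,used,available zfs_main   # dataset view: the single "available" number every dataset reports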

 
What is the use case for using ZFS and disks in this way? It's not a recommended setup; if I had only two disks like this, I would use bcachefs.
 
Well, ZFS allows a special device to store metadata as well as small files to speed up the pool. Previously, I split my NVMe in two: 1800G as an NVMe-only pool for zvols, and 200G as the special device for the HDD pool (zfs_main).

With ZFS 2.4.0, zvol writes can now be allocated to the special device as well, so I combined the two pools.
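Roughly, the combined setup looks like this (a sketch, with the device names from above; 64K is just an example cutoff):

zpool create zfs_main /dev/sdb special /dev/sda   # HDD as the data vdev, NVMe as the special vdev
zfs set special_small_blocks=64K zfs_main         # blocks at or below this size go to the special vdev (metadata always does)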


If I kept the previous two-pool setup, it would be very inconvenient to add more special devices to the HDD pool: I would first need to shrink the NVMe pool and then add to the HDD pool, which with ZFS is never straightforward. With 2.4.0, the idea is that this NVMe can be shared much more easily.
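With everything in one pool, growing the special tier should just be a single add, e.g. with a hypothetical second NVMe:

zpool add zfs_main special /dev/nvme1n1   # attach another special vdev to the existing pool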


I did try bcachefs, but it had a lot of issues for me, and now that it's no longer in the mainline kernel, the maintenance situation isn't really appealing.