I think the issue is not the vdev redundancy level. The same would happen even with the following pool layout, which should be sane:
- 3x mirror special device
- RAIDZ3 8x HDD vdev
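For reference, a hypothetical way to create that layout (pool name, device names, and the 3-way mirror reading of "3x mirror" are all placeholder assumptions):

```sh
# Sketch only: 8-wide RAIDZ3 data vdev plus a 3-way mirrored special vdev
zpool create zfs_main \
    raidz3 sda sdb sdc sdd sde sdf sdg sdh \
    special mirror nvme0n1 nvme1n1 nvme2n1
```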
ZFS 2.4 specifically allows zvol writes to land on the special vdev, so it's...
Basically the issue is:
My SSD has 780G available.
My HDDs have 1.6TB available.
But the pool reports only 450GB available.
My expectation is that it should say 1.6TB available, since that's how much data my "data" vdev can hold.
Compression is zstd-4...
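For anyone wanting to reproduce the numbers, a minimal way to compare the per-vdev and dataset-level space accounting (zfs_main is the pool name mentioned below; adjust to yours):

```sh
# Per-vdev view: FREE is listed separately for the raidz3 data vdev
# and the mirrored special vdev
zpool list -v zfs_main

# Dataset view: AVAIL here is what df and Proxmox report
zfs list -o space zfs_main

# Exact pool-wide byte counts
zpool get -p size,allocated,free zfs_main
```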
Well, ZFS allows a special device to store metadata as well as small files to speed up the pool. Previously, I split my NVMe in two: 1800G for an NVMe-only pool for zvols, and 200GB as the special vdev for the HDD pool (zfs_main).
With ZFS 2.4.0, zvol writes can...
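If I understand the 2.4.0 change correctly, eligibility is governed by special_small_blocks versus the zvol's volblocksize; a minimal sketch (the zvol name vm-100-disk-0 is just a placeholder):

```sh
# Blocks <= special_small_blocks are eligible for the special vdev;
# in 2.4.0 this cutoff applies to zvol writes as well
zfs get volblocksize zfs_main/vm-100-disk-0
zfs get special_small_blocks zfs_main

# Setting the cutoff to 0 should keep data blocks (including zvol
# writes) off the special vdev, leaving it metadata-only
zfs set special_small_blocks=0 zfs_main
```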
| Type | Version/Name |
| --- | --- |
| Distribution Name | Proxmox VE (Debian Trixie) |
| Distribution Version | 9.1.5 (Debian 13) |
| Kernel Version | Linux 6.17.9-1-pve |
| Architecture | x86_64 |
| OpenZFS Version | zfs-2.4.0-pve1 / zfs-kmod-2.4.0-pve1 |
My zpool has 2 devices: sdb...
That's true, though per the incus thread it seems openSUSE's QEMU build works, so it looks like it's fixable on the QEMU side. I'll file an issue.
As for performance: since I haven't been able to actually get it working to my liking, I'm currently just...
My Hardware:
- Ryzen 7 7800X3D
- 96GB DDR5
- 4TB NVMe
- Onboard 2.5G LAN
- X520-DA2 2x 10G SFP+
The machine has been running stable, and I can use VMs with VMware and Hyper-V with 32GB of RAM just fine. However, I can't seem to enable nested virtualization with...
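For what it's worth, this is how nested virtualization is normally checked and enabled for kvm_amd; whether it solves the problem here is untested, and VM ID 100 is a placeholder:

```sh
# Check whether nesting is enabled for the AMD KVM module
# (1 = enabled; recent kernels default to on)
cat /sys/module/kvm_amd/parameters/nested

# Enable it persistently, then reload the module (no VMs running)
echo "options kvm_amd nested=1" > /etc/modprobe.d/kvm-amd.conf
modprobe -r kvm_amd && modprobe kvm_amd

# The guest also needs a CPU type that passes through virtualization
# extensions, e.g. "host" in Proxmox
qm set 100 --cpu host
```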
So after scouring all previous forum posts, I still can't seem to find a clear answer on how Proxmox actually deals with NUMA nodes.
My setup:
EPYC 7663, 56c/112t (yes, it might be a bit suboptimal; I will try to switch to a 7B13 with full...
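In case it helps the discussion, a minimal sketch of what I'd check first (VM ID 100 is a placeholder):

```sh
# Inspect the host's NUMA topology (node count on EPYC depends on
# the NPS setting in the BIOS)
numactl --hardware
lscpu | grep -i numa

# Enable NUMA for a VM so QEMU exposes a NUMA topology to the guest
# and memory/vCPUs can be balanced across host nodes
qm set 100 --numa 1
```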