Recent content by gogito

  1.

    ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    I think the issue is not the vdev redundancy level. The same would happen even with the following pool layout, which should be sane: 3x mirror special device + raidz3 8x HDD data vdev. ZFS 2.4 specifically allows zvol writes to land on the special vdev, so it's kinda odd that this missed testing, since it's the...
  2.

    ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    I already did, a few days ago, but there's been no activity; I was looking to see if anyone else has the same issue.
  3.

    ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    Here's the output:

    root@beta:~# zfs list -o name,quota,reservation
    NAME              QUOTA  RESERV
    rpool             none   none
    rpool/ROOT        none   none
    rpool/ROOT/pve-1  none   none
    rpool/data...
  4.

    ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    You can see in the same image that the command below shows the zfs_main dataset's available space is only 448GB.
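    If a GUI number is in doubt, `zfs list -Hp -o name,avail` prints exact byte counts in a script-friendly, tab-separated form. A minimal sketch converting such output to GiB for comparison; the sample values below are illustrative only, not taken from this pool:

```shell
# Sample of what `zfs list -Hp -o name,avail` emits (tab-separated, bytes).
# The dataset names follow the post; the byte counts are made up.
sample=$'zfs_main\t481036337152\nzfs_main/data\t481036337152'

# Convert raw byte counts to GiB for easier comparison with `zpool list -v`.
echo "$sample" | awk -F'\t' '{ printf "%s\t%.0f GiB\n", $1, $2 / (1024^3) }'
# → zfs_main        448 GiB
# → zfs_main/data   448 GiB
```

    On a live host, pipe the real command into the same awk one-liner instead of the sample variable.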
  5.

    ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    Basically the issue is: my SSD has 780G available, my HDD has 1.6TB available, but the free space is reported as only 450GB. My expectation is that it should say 1.6TB available, since that's how much data my "data" vdev can hold. Compression is zstd-4. My data makeup is normal; there is no snapshot or...
  6.

    ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    Ah, forgot to mention: it's kinda like my backup pool. My main pool is a mirror special device + raidz1 4x HDD.
  7.

    ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    Well, ZFS allows a special device to store metadata as well as small files to speed up the pool. Previously, I split my NVMe in 2: 1800G for an NVMe-only pool for zvols, and 200GB as a special device for the HDD pool (zfs_main). With ZFS 2.4.0, zvol writes can now be allocated to the special device as well, so I...
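    For reference, the property that steers small data blocks to the special vdev is `special_small_blocks`. A hedged sketch, not from the post; the pool name follows the post (zfs_main), and the 16K threshold is an arbitrary illustrative value:

```shell
# Show the current small-block threshold for the pool's root dataset.
zfs get special_small_blocks zfs_main

# Route data blocks of 16K or smaller (including zvols whose volblocksize
# is at or below the threshold) to the special vdev. Setting it to 0 keeps
# the special vdev for metadata only.
zfs set special_small_blocks=16K zfs_main
```

    Whatever behavior 2.4.0 changed for zvol allocation on the special vdev, checking this property is a reasonable first step when the special device fills unexpectedly.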
  8.

    ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    Distribution Name | Proxmox 9.1.5 Debian Trixie 13
    Distribution Version | Proxmox 9.1.5 Debian Trixie 13
    Kernel Version | Linux 6.17.9-1-pve
    Architecture | x86_64
    OpenZFS Version | zfs-2.4.0-pve1 / zfs-kmod-2.4.0-pve1

    My zpool has 2 devices: sdb (HDD) and sda (NVMe). The issue is that all dataset...
  9.

    Proxmox on HyperV, start VM crash if more than 2815MB of memory

    That's true, though per the Incus thread it seems openSUSE's QEMU works, so it looks fixable on the QEMU side. I'll file an issue. About performance: since I haven't been able to actually get it working to my liking, I'm currently just migrating some VMs directly to Hyper-V. I'm not gonna...
  10.

    Proxmox on HyperV, start VM crash if more than 2815MB of memory

    Hi, sorry for the lack of information in the post; I've updated it with more information :)
  11.

    Proxmox on HyperV, start VM crash if more than 2815MB of memory

    My hardware:

    Ryzen 7 7800X3D
    96GB DDR5
    4TB NVMe
    Onboard 2.5G LAN
    X520-DA2 2x10G SFP+

    The machine has been running stable, and I can use VMs with 32GB of RAM just fine under both VMware and Hyper-V (although I can't seem to enable nested virtualization with VMware). Windows 11 25H2, Hyper-V enabled, Nested...
  12.

    Epyc 7003 CCX L3 as 8 NUMA configuration

    So after scouring all the previous forum posts, I still can't seem to find a clear answer on how Proxmox actually deals with NUMA nodes. My setup: EPYC 7663, 56c/112t (yes, it might be a bit suboptimal; I will try to switch to a 7B13 with the full 64c/128t later), 8x 64GB DDR4, 512GB total. BIOS set to...
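    For context, Proxmox VMs only see a guest NUMA topology when it is enabled in the VM config; `numaN:` lines can additionally pin guest nodes to host nodes. A hedged sketch of such a config fragment; the VMID (100) and the cpu/memory numbers are placeholders, not from the post:

```
# /etc/pve/qemu-server/100.conf (illustrative fragment)
cores: 16
sockets: 1
# Expose a NUMA topology to the guest:
numa: 1
# Optionally pin guest NUMA node 0 to host node 0 (memory is in MiB):
numa0: cpus=0-15,hostnodes=0,memory=32768,policy=bind
```

    Without explicit `numaN:` entries, placement across the host's nodes is left to the kernel scheduler.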
  13.

    Proxmox VE 8.2.2 - High IO delay

    Following. I have the same issue, but for me it happens periodically/randomly: IO wait will spike to 40% for a few minutes, then stop. On 2x 2TB NVMe, PVE 8.3, kernel 6.8.12-4.
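    To quantify spikes like this, the iowait share can be computed from the aggregate `cpu` line in /proc/stat. A minimal sketch using a made-up sample line (on a live host, substitute `grep '^cpu ' /proc/stat`, and diff two samples for an interval reading):

```shell
# Fields after "cpu" are: user nice system idle iowait irq softirq steal guest guest_nice.
# These jiffy counts are illustrative only.
line="cpu 10000 0 5000 80000 4000 0 0 0 0 0"

# iowait ($6) as a percentage of total jiffies.
echo "$line" | awk '{ total = 0; for (i = 2; i <= NF; i++) total += $i;
                      printf "iowait: %.1f%%\n", 100 * $6 / total }'
# → iowait: 4.0%
```

    Tools like `iostat -x 1` give the same number per interval, plus per-device utilization, which helps pin a spike to a specific NVMe.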
  14.

    RBD Cache to offset consumer NVME latency for an uptime prioritized cluster (data consistency lower priority)

    Hi everyone, so I have a Proxmox cluster with ZFS replication on consumer NVMe that I'm planning to change into Ceph. The cluster hosts multiple VMs that require high uptime so users can log in and do their work; the user data is on an NFS share (also on a VM). The data is backed up periodically, and I am ok...
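    Since the question is about RBD caching, a hedged sketch of the librbd client-side cache settings involved; the size value is illustrative, and writeback caching is exactly the consistency-for-latency trade the post says is acceptable:

```shell
# Enable the librbd in-memory cache for clients (values are illustrative).
ceph config set client rbd_cache true
ceph config set client rbd_cache_policy writeback
ceph config set client rbd_cache_size 67108864   # 64 MiB per-image cache

# Safety valve: stay writethrough until the guest issues its first flush,
# so guests without working flush semantics aren't silently at risk.
ceph config set client rbd_cache_writethrough_until_flush true
```

    Note this cache absorbs latency on reads and small rewrites, but writeback data not yet flushed can be lost on a client crash, which matches the stated priority of uptime over strict consistency.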