Search results

  1. ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    I think the issue is not the vdev redundancy level. The same would happen even with the following pool layout, which should be sane: a 3x mirror special device plus a RAIDZ3 8x HDD data vdev. ZFS 2.4 now specifically allows zvol writes to land on the special vdev, so it's odd that this missed testing, since it's the...
  2. ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    I already did that a few days ago, but there's been no activity; I was looking to see if anyone else has the same issue.
  3. ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    Here's the output:
    root@beta:~# zfs list -o name,quota,reservation
    NAME              QUOTA  RESERV
    rpool             none   none
    rpool/ROOT        none   none
    rpool/ROOT/pve-1  none   none
    rpool/data...
  4. ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    You can see in the same image that the command below shows the zfs_main dataset's available space is only 448 GB.
  5. ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    Basically the issue is: my SSD has 780 GB available and my HDD has 1.6 TB available, but free space is reported as only 450 GB. My expectation is that it should say 1.6 TB available, since that's how much data my "data" vdev can hold. Compression is zstd-4. My data makeup is normal; there is no snapshot or...
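The figures in the report make the mismatch easy to state: if dataset AVAIL tracked the data vdev alone, roughly 1.6 TB should be shown, not 450 GB. A plain restatement of that arithmetic (figures copied from the post; decimal GB throughout, nothing here queries a live pool):

```python
# Free-space figures from the report (decimal GB).
ssd_avail_gb = 780   # special vdev (NVMe)
hdd_avail_gb = 1600  # data vdev (1.6 TB HDD)
reported_gb = 450    # what `zfs list` AVAIL shows

# Expectation in the post: AVAIL should follow the data vdev.
expected_gb = hdd_avail_gb
missing_gb = expected_gb - reported_gb
print(missing_gb)  # 1150 GB unaccounted for in the reported AVAIL
```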
  6. ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    Ah, forgot to mention: it's kind of like my backup pool. My main pool is a mirror special device + RAIDZ1 4x HDD.
  7. ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    Well, ZFS allows a special device to store metadata as well as small files to speed up the pool. Previously, I split my NVMe in two: 1800 GB for an NVMe-only pool for zvols, and 200 GB as a special device for the HDD pool (zfs_main). With ZFS 2.4.0, zvol writes can now be allocated to the special device as well, so I...
  8. ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    Distribution Name | Proxmox 9.1.5 (Debian Trixie 13)
    Distribution Version | Proxmox 9.1.5 (Debian Trixie 13)
    Kernel Version | Linux 6.17.9-1-pve
    Architecture | x86_64
    OpenZFS Version | zfs-2.4.0-pve1 - zfs-kmod-2.4.0-pve1
    My zpool has 2 devices: sdb (HDD) and sda (NVMe). The issue is all dataset...
  9. Proxmox on HyperV, start VM crash if more than 2815MB of memory

    That's true, though per the Incus thread it seems openSUSE's QEMU works, so it looks fixable on the QEMU side. I'll file an issue. About performance: since I haven't been able to get it working to my liking, I'm currently just migrating some VMs directly to Hyper-V. I'm not gonna...
  10. Proxmox on HyperV, start VM crash if more than 2815MB of memory

    Hi, sorry for the lack of information in the post, I updated with more information :)
  11. Proxmox on HyperV, start VM crash if more than 2815MB of memory

    My hardware: Ryzen 7 7800X3D, 96 GB DDR5, 4 TB NVMe, onboard 2.5G LAN, X520-DA2 2x 10G SFP+. The machine has been running stable, and I can use VMs with VMware and Hyper-V with 32 GB of RAM just fine (although I can't seem to enable nested virtualization with VMware). Windows 11 25H2, Hyper-V enabled, Nested...
  12. Epyc 7003 CCX L3 as 8 NUMA configuration

    So after scouring all previous forum posts, I still can't seem to find a clear answer on how Proxmox actually deals with NUMA nodes. My setup: Epyc 7663 56c/112t (yes, it might be a bit suboptimal; I will try to switch to a 7B13 with the full 64c/128t later), 8x 64GB DDR4, 512 GB total. BIOS set to...
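With the BIOS option in the thread title (each CCX's L3 exposed as its own NUMA node), the 7663's cores split evenly across its 8 CCDs. A quick sketch of the resulting per-node layout, assuming one NUMA node per CCX and an even memory interleave (the even split is an assumption, not a guarantee of how the BIOS maps memory):

```python
# Per-NUMA-node layout for an Epyc 7663 with L3-as-NUMA enabled,
# assuming each of the 8 CCXs is exposed as one NUMA node.
cores, threads, numa_nodes = 56, 112, 8
mem_gib = 8 * 64  # 8x 64 GiB DIMMs = 512 GiB total

cores_per_node = cores // numa_nodes      # 7 cores per CCX
threads_per_node = threads // numa_nodes  # 14 threads per CCX (SMT on)
mem_per_node = mem_gib // numa_nodes      # 64 GiB, if evenly interleaved

print(cores_per_node, threads_per_node, mem_per_node)  # 7 14 64
```

A VM sized to fit inside one such node (up to 7 cores / 64 GiB) avoids cross-CCX L3 traffic, which is usually the point of enabling the option.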
  13. Proxmox VE 8.2.2 - High IO delay

    Following; I have the same issue, but for me it happens periodically/randomly: IO wait will spike to 40% for a few minutes, then stop. On 2x 2TB NVMe, PVE 8.3, kernel 6.8.12-4.
  14. RBD Cache to offset consumer NVME latency for an uptime prioritized cluster (data consistency lower priority)

    Hi everyone, so I have a Proxmox cluster with ZFS replication on consumer NVMe that I'm planning to change to Ceph. The cluster hosts multiple VMs that require high uptime so users can log in and do their work; the user data is on an NFS share (also on a VM). The data is backed up periodically, and I am ok...
  15. /var/tmp/espmounts/7276-6706/EFI/proxmox/6.5.13-6-pve: No space left on device when installing new kernel

    I have a single-node install with PVE 8.2.4 and a ZFS mirror as the boot drive. While doing apt update I was advised to run apt --fix-broken install. I ran that and got this:
    root@pve:~# apt --fix-broken install
    Reading package lists... Done
    Building dependency tree... Done
    Reading state...
  16. Proxmox CPU allocation with P and E cores?

    Hey guys, how does Proxmox handle the P and E cores when allocating? For example, on a 13900K I have 8 P-cores and 16 E-cores => 32 threads => 32 vCPUs. When creating a VM, I input 8 cores for the VM. How would that behave performance-wise? Will it get a maximum of 8 P-cores? Or will it use CPU % like 8/32...
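One correction to the thread math: a 13900K's 8 P-cores are hyper-threaded (2 threads each) while its 16 E-cores are not, giving 32 hardware threads rather than 36. A minimal sketch of the arithmetic (core counts are the only inputs; nothing here queries real hardware or the Proxmox scheduler):

```python
# Hardware-thread math for a hybrid Intel CPU (i9-13900K figures).
# P-cores have Hyper-Threading (2 threads each); E-cores do not.
p_cores, e_cores = 8, 16
threads = p_cores * 2 + e_cores * 1
print(threads)  # 32 hardware threads => 32 host vCPUs schedulable

# An 8-vCPU VM's share of total host CPU time, if scheduling
# were purely proportional across all threads:
share = 8 / threads
print(f"{share:.0%}")  # 25%
```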
  17. ZFS mirror various disk size

    Hi Aaron, I'm planning to do the same: 1x 2TB NVMe and 1x 4TB NVMe, mirroring 100GB from each and leaving the rest unpartitioned so I can use it for other things. For this, based on your instructions, I would have to:
    1. Choose the 2TB drive for ZFS RAID0 during installation
    2. Use hdsize = 100GB
    3...
  18. Installing Proxmox Root on LVM-Thin

    Hi, I was wondering if it's possible to install Proxmox with the root partition on LVM-thin instead of thick LVM like the default. Wouldn't that be a better approach, since the root could then take as much or as little storage as needed from the pool?
  19. Alder Lake i5 12400, Windows 11 guest with WSL Nested Virtualization not working

    Hi guys, I'm trying to run a Windows 11 guest with WSL2. My config is the following: i5-12400, Proxmox 7.4-3, kernel 5.15.104-1, Windows 11 guest with CPU type Host, nested virtualization enabled (checked with cat /sys/module/kvm_intel/parameters/nested). When booting into Windows 11, Task Manager...