Search results

  1. Memory leakage of one of my cluster nodes

    Here you have the output for pve3:
      PID   USER  PR  NI  VIRT    RES     SHR    S  %CPU  %MEM  TIME+      COMMAND
      1824  root  rt  0   560176  167292  52788  S  1.0   0.5   151:59.77  corosync
      1858  root  20  0   ...
  2. Memory leakage of one of my cluster nodes

    pve1
      time      read  ddread  ddh%  dmread  dmh%  pread  ph%  size  c    avail
      09:19:23  0     0       0     0       0     0      0    50G   53G  56G
    pve2
      time      read  ddread  ddh%  dmread  dmh%  pread  ph%  size  c    avail
      09:18:51  0     0       0     0       0     0      0    ...
  3. Memory leakage of one of my cluster nodes

    Hello, thank you for your reply. Yes, ZFS. But both are identical servers - how can I check whether everything is going as expected?
  4. Memory leakage of one of my cluster nodes

    I have a cluster of 3 machines, of which 2 are somewhat identical, all running pve-manager/8.2.4/faa83925c9641325. The one that has no VMs actively running on it has a memory usage of approx. 50%, whereas the other one, with 2 VMs, is at approx. 18%. Something is clearly wrong, but I don't know how to...
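    Given the ZFS setup and the arcstat output above (ARC size around 50G), the idle node's roughly 50% memory usage is plausibly just the ZFS ARC, which by default may grow to about half of the RAM and is reported as used memory. A minimal sketch of checking and capping it; the 8 GiB figure is only an illustrative value, not something taken from the thread:

      cat /sys/module/zfs/parameters/zfs_arc_max        # 0 means the built-in default limit
      arcstat 1 1                                       # current ARC size (size / c columns)

      # cap the ARC at runtime, e.g. to 8 GiB
      echo $((8 * 1024**3)) > /sys/module/zfs/parameters/zfs_arc_max

      # make the cap persistent across reboots
      echo "options zfs zfs_arc_max=$((8 * 1024**3))" >> /etc/modprobe.d/zfs.conf
      update-initramfs -u -k all
      proxmox-boot-tool refresh                         # on a ZFS-booted host, refresh the boot images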
  5. HD Full

    @weehooey thank you for asking. I found the issue thanks to you and the team here on the community. THANK YOU!
  6. HD Full

    Thank you for your responses. Here are the requested items.
    The contents of /etc/pve/storage.cfg:
      dir: local
              path /var/lib/vz
              content iso,vztmpl,backup
      zfspool: local-zfs
              pool rpool/data
              content rootdir,images
              sparse 1
      pbs: PBS2
              datastore...
  7. HD Full

    Effectively, I used fstab in the following way:
      # <file system> <mount point> <type> <options> <dump> <pass>
      proc /proc proc defaults 0 0
      UUID=31f52e69-408f-4a50-83fb-037f5e4eccdb /mnt/media ext4 defaults,noatime,nofail 0 2
    I went looking for /mnt/synology, but I can't find it.
  8. HD Full

    Here you go:
      proxmox-backup-client failed: Error: unable to open chunk store 'Synology3' at "/mnt/synology/.chunks" - No such file or directory (os error 2)
      Name        Type     Status  Total  Used  Available  %
      HA-Storage  zfspool  active...
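    The chunk store error above, read together with the fstab from the previous result, suggests that /mnt/synology is simply not mounted (or not created) on this node. A quick way to verify that, and to see whether data was written underneath the empty mount point, is the following sketch; only the /mnt/synology path is taken from the thread:

      findmnt /mnt/synology            # no output means nothing is mounted there
      ls -la /mnt/synology             # does the directory exist at all?
      du -xsh /mnt/* /var/lib/vz       # -x stays on one filesystem, so data dumped under an
                                       # unmounted mount point shows up in these totals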
  9. HD Full

    Thank you for your reply. Here are the results of zpool list:
      NAME        SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
      HA-Storage  1.81T  52.0G  1.76T  -        -         8%    2%   1.00x  ONLINE  -
      rpool       236G   188G   47.5G  -        -         40%...
  10. HD Full

    Dear all, please provide some help, because I'm unable to understand what's going on. I have a cluster of 3 servers, partly in HA. All 3 have a ZFS pool on which I'm running PVE 8.2.2. On pve2 and pve3 the used disk space is approx. 10G, however for some reason on pve1 I'm at 182G. Running du...
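    For pinning down where those 182G actually live, a hedged starting point using standard ZFS and GNU tools (rpool is the default pool name that also appears in the other results):

      zfs list -o name,used,refer,mountpoint -r rpool    # per-dataset usage on the pool
      du -xsh /* 2>/dev/null | sort -h                    # largest directories on the root filesystem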
  11. When a cluster node is lost is it possible to restart its VM on another node?

    @spirit how can you adjust this after setting up the cluster?
  12. When a cluster node is lost is it possible to restart its VM on another node?

    Thank you for your comments. @spirit How can you create a higher priority for PVE1?
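    Node priorities for HA live in an HA group, not in the cluster configuration itself, so they can be added or changed at any time after the cluster is set up (Datacenter -> HA -> Groups in the GUI). A CLI sketch; the group name and the guest ID are made up for illustration:

      # higher number = preferred node, so pve1 wins here
      ha-manager groupadd prefer-pve1 --nodes "pve1:2,pve2:1,pve3:1"
      # attach an HA-managed guest to that group
      ha-manager set vm:100 --group prefer-pve1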
  13. GPU Pass through problems

    I removed the initial commented part of the file and got the following result:
      initrd=\EFI\proxmox\6.5.11-6-pve\initrd.img-6.5.11-6-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt initcall_blacklist=sysfb_init pcie_acs_override=downstream,multifunction
    Also if I'm looking now at...
  14. GPU Pass through problems

    Thank you for your quick reply.
    - Output is initrd=\EFI\proxmox\6.5.11-6-pve\initrd.img-6.5.11-6-pve #root=ZFS=rpool/ROOT/pve-1 boot=zfs.
    - It doesn't contain intel_iommu=on
    - VT-d is enabled in the motherboard BIOS
    - I assume that my motherboard supports VT-d as it's mentioned in the...
  15. GPU Pass through problems

    Dear all, my purpose is to set up a Windows Server 2019 VM using the NVIDIA GPU. Proxmox 8.1 boots on a ZFS mirror.
    My hardware:
    - Motherboard: Z590 PRO4
    - GPU: NVIDIA Corporation TU116 [GeForce GTX 1650]
    The BIOS settings adjustments (listed) ensure that PVE doesn't use my NVIDIA GPU but the...
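    Since this host boots from a ZFS mirror, the kernel command line quoted in the results above is most likely the one in /etc/kernel/cmdline (systemd-boot via proxmox-boot-tool) rather than GRUB's GRUB_CMDLINE_LINUX_DEFAULT. A sketch of applying and verifying the IOMMU options; the flags themselves are the ones already shown in the thread:

      # /etc/kernel/cmdline - a single line, options appended after root=... boot=zfs
      root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt

      proxmox-boot-tool refresh        # write the updated command line to the boot partitions
      reboot
      dmesg | grep -e DMAR -e IOMMU    # after the reboot, look for "DMAR: IOMMU enabled"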
  16. Issue setting up GPU Passthrough (Dual-GPU)

    Hello, @cayubweeums, I'm having the same issues. When I look at my grouping with pvesh get /nodes/pve1/hardware/pci --pci-class-blacklist "", everything is grouped under -1. Were you able to get rid of the yellow banner stating "IOMMU detected, please activate it. See Documentation for further...
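    An IOMMU group of -1 in that pvesh output generally means the kernel has not created any groups, i.e. the IOMMU is not actually active yet, which would match the yellow banner. A quick way to inspect the groups directly, with no assumptions beyond a standard sysfs layout:

      # one line per PCI device, prefixed with its IOMMU group number;
      # no output at all means no IOMMU groups exist
      for d in /sys/kernel/iommu_groups/*/devices/*; do
          g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
          echo "group $g: $(lspci -nns "${d##*/}")"
      done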
  17. When a cluster node is lost is it possible to restart its VM on another node?

    I've set up HA for 1 VM and 1 LXC to test (also my main servers). PVE1 is my main server, on which these VM/LXC were running. I shut down PVE1 to see what would happen. PVE2 launched my LXC and PVE3 launched my VM. All works as expected. Now I restarted PVE1 and expected that my LXC and VM...
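    Whether the guests move back to PVE1 once it is up again depends on the HA group: a resource only fails back automatically to a node with a higher priority, and only while the group's nofailback flag is off. A sketch, reusing the hypothetical prefer-pve1 group from the earlier result:

      ha-manager groupset prefer-pve1 --nodes "pve1:2,pve2:1,pve3:1" --nofailback 0
      ha-manager status                # shows on which node each HA resource currently runs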
  18. Install PBS on PVE Host?

    Were you in the end able to pass a few disks through to the PBS? If so, can you please explain in a few words what you did?
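    Assuming the PBS in question runs as a VM on the PVE host, a common way to hand whole disks to it is qm's disk passthrough by stable device path. A sketch with made-up identifiers (VM ID 101 and the by-id name are placeholders, not from the thread):

      ls -l /dev/disk/by-id/                                  # find the disk's stable identifier
      qm set 101 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL    # attach the whole disk to VM 101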