Recent content by Tachy

  1. Proxmox 9 - IO error ZFS

    Thanks for the idea, we’ll try this and see whether we still get IO errors and ZFS errors. This feature could indeed be related to our issue since it was introduced in 2.3x / PVE 9. However, I don’t think the application running in the VM is performing O_DIRECT writes, as it’s a Grafana Mimir...
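
    For reference, the feature in question looks like the Direct I/O support added in OpenZFS 2.3.x; a minimal sketch of the workaround we plan to try, assuming the VM disks sit on a dataset like rpool/data (placeholder name):

      # check the current Direct I/O policy on the dataset (ZFS 2.3+ only)
      zfs get direct rpool/data
      # route O_DIRECT requests back through the ARC instead of bypassing it
      zfs set direct=disabled rpool/data
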
  2. Proxmox 9 - IO error ZFS

    Hi everyone, We’ve been deploying several new Proxmox 9 nodes using ZFS as the primary storage, and we’re encountering issues where virtual machines become I/O locked. When it happens, the VMs are paused with an I/O error. We’re aware this can occur when a host runs out of disk space, but in...
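
    As a first sanity check (the pool name rpool is only an example), we rule out the pool simply filling up with something like:

      # overall pool capacity, fragmentation and health
      zpool list rpool
      # per-dataset space accounting, including snapshots and reservations
      zfs list -o space rpool
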
  3. [SOLVED] New nodes timeout on storage tabs

    Okay! It was a Ceph storage we have at OVH that was not added correctly (OVH's APIs are all buggy, duh). I just added it again and the storage is available, and now everything else is available as well! Is it expected, though, that every storage is considered unavailable when only one of them can't respond? Maybe...
  4. [SOLVED] New nodes timeout on storage tabs

    Awww... In the web interface I just tried to disable the Ceph storages for those nodes only, and I get no more timeouts. That leaves figuring out why I can't connect to my rbd storage :/
  5. [SOLVED] New nodes timeout on storage tabs

    Found out I can list objects from local and NAS:

      pvesm list NASVMRBX02
      NASVMRBX02:2000/vm-2000-disk-1.qcow2    qcow2  214748364800  2000
      NASVMRBX02:20001/vm-20001-disk-1.qcow2  qcow2  214748364800  20001
      NASVMRBX02:30009/vm-30009-disk-1.qcow2  qcow2  214748364800  30009
      ...
  6. [SOLVED] New nodes timeout on storage tabs

    Hi! Here is what I got on the old nodes:

      pvesm status
      CEPHOVH01   rbd  1   5754024888    996693940   4757330948  17.82%
      CEPHRBX01   rbd  1  70291976760   7995822556  62296154204  11.88%
      NASISOGRA   nfs  1    104857600      5296128     99561472   5.55%
      NASVMRBX01...
  7. [SOLVED] New nodes timeout on storage tabs

    Hi there! We just installed new nodes in our cluster and configured SSL, all good. But on the new nodes, when I try to access info from the attached storages, it shows a spinner for about a minute and then says "Communication failure (0)". It only happens on the storages (even local); I can...
  8. Trim virtual drives

    Nice! Thanks for the feedback! We created our own Ceph cluster for the disk storage and it just works great!
  9. noVNC console keep resetting

    Hello everyone, I'm having trouble with the noVNC console, which keeps resetting. I can use it, then suddenly there is something like a refresh: the console reloads and I can use it again. But this reload happens roughly every five or ten seconds. It started happening after the first renewal of our Let's...
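
    Since it started right after the certificate renewal, one thing worth checking (just a guess on my side) is whether the proxy is serving the renewed certificate; replace <node> with the host name:

      # inspect the certificate currently served on the web/VNC port
      openssl s_client -connect <node>:8006 </dev/null 2>/dev/null | openssl x509 -noout -dates
      # reload the proxy so it picks up the renewed certificate
      systemctl restart pveproxy
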
  10. Trim virtual drives

    We are currently using the command given in the wiki to shrink the disks and we are having no problems. Should we be worried about that? And is compression incompatible with "preallocation=metadata"?
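
    For context, the command we are using is along these lines (file names are just examples); the question is whether the -c compression flag still makes sense next to preallocation=metadata:

      # rewrite the image to drop unused space, keeping metadata preallocation
      qemu-img convert -O qcow2 -o preallocation=metadata vm-disk.qcow2 vm-disk-shrunk.qcow2
      # same rewrite, but with qcow2 compression enabled
      qemu-img convert -c -O qcow2 vm-disk.qcow2 vm-disk-shrunk.qcow2
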
  11. Trim virtual drives

    Aww, too bad, thanks for the answer (: Maybe it would be a good idea to list which protocols support trim in the wiki? https://pve.proxmox.com/wiki/Shrink_Qcow2_Disk_Files Have a good day!
  12. Trim virtual drives

    We are already using qcow2 for the virtual drives, so it should work with NFS, I guess? So far we meet every requirement.
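
    By "every requirement" I mean roughly this kind of setup (the VM id, storage and volume names below are purely illustrative):

      # the guest must see a disk that passes TRIM through, e.g. discard=on on a SCSI disk
      qm set <vmid> --scsi0 <storage>:<vmid>/vm-<vmid>-disk-1.qcow2,discard=on
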
  13. Trim virtual drives

    Our storage is an NFS share, and even when running fstrim manually, it doesn't shrink the virtual drive.
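
    To be clear, by "running fstrim manually" I mean something like this inside the guest:

      # discard unused blocks on every mounted filesystem that supports it
      fstrim -av
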
  14. Trim virtual drives

    Thank you for the answer! Here is the output of qm config 20002:

      boot: cdn
      bootdisk: scsi0
      cores: 2
      description: ############
      ide2: none,media=cdrom
      memory: 6144
      name: ###########
      net0: e1000=C6:96:11:76:32:64,bridge=vmbr2
      numa: 1
      onboot: 1
      ostype: l26
      scsi0...
  15. Trim virtual drives

    Hey everyone! We were moving disks from one storage to another and noticed that when they arrive on the new storage, the thin-provisioned disks expand to their full size. Before, when we had only a few VMs, we could use the old method to empty the disks (using dd to fill the disk with zeros and deleting...
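
    For reference, the old method was roughly the following, run inside the guest before compacting the image on the host (the path is only an example):

      # fill the free space with zeros so the image compacts well
      # dd will stop with "No space left on device", which is expected here
      dd if=/dev/zero of=/zerofile bs=1M status=progress
      # drop the filler file and flush to disk
      rm /zerofile && sync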