Search results

  1. [SOLVED] Timesyncd

    Oh, ok. My fault. Thank you @t.lamprecht !
  2. [SOLVED] Timesyncd

    Hi, I see that systemd-timesyncd is not installed by default. I was wondering: what if I do install it? Isn't timekeeping important for cluster operations? Thank you
  3. [SOLVED] Workaround to use qemu-user-agent without guest-fsfreeze ?

    Hi @mira , thanks for your feedback. I added myself to the CC list, hope it's going to be released soon.
  4. Disable fs-freeze on snapshot backups

    Hi @cheiss , thanks for your kind reply. Unfortunately, solutions lowering the security level are not an option in my organization. I rather lose functionality than security. I commented on the feature request, I hope they publish the feature soon.
  5. Disable fs-freeze on snapshot backups

    Is it possible to somehow disable the call to guest-agent fsfreeze when performing a snapshot backup? Despite the update to latest Proxmox VE 7.3-4 and opt-in 5.19.17-2-pve kernel, I'm still having the issue where the fsfreeze command blocks the guest filesystem and there's no solution except...
  6. [SOLVED] Workaround to use qemu-user-agent without guest-fsfreeze ?

    Hi @mira , I successfully migrated to Proxmox 7.3 & kernel 5.19.17-2-pve. Everything went smoothly, including live-migrations. Unfortunately, though, the original issue still happens with CentOS 7.9 guests. Any clue?
  7. [SOLVED] Workaround to use qemu-user-agent without guest-fsfreeze ?

    Also, what if I pause a VM and then resume it on the new node? It's always better than stop/start.
  8. [SOLVED] Workaround to use qemu-user-agent without guest-fsfreeze ?

    Hi @mira , you said I will not be able to live migrate VMs from nodes running kernel 5.13 to nodes running kernel 5.19. What if I update the kernel in steps, e.g. 5.13 -> 5.15 -> 5.19 ? Thank you
  9. [SOLVED] Workaround to use qemu-user-agent without guest-fsfreeze ?

    Hi @mira , yes, the kernel is still version 5.13.19-6-pve
  10. [SOLVED] Workaround to use qemu-user-agent without guest-fsfreeze ?

    Hi @mira , just to inform you that a VM with `threads` Async I/O and VirtIO SCSI Single controller just froze in the same way during the backup.
  11. [SOLVED] Workaround to use qemu-user-agent without guest-fsfreeze ?

    Hi @mira , thanks a lot! We're planning to upgrade to Proxmox VE 7.3 soon, so we'll take the opportunity to upgrade the kernel as well. I'm marking the thread as [SOLVED].
  12. [SOLVED] Workaround to use qemu-user-agent without guest-fsfreeze ?

    Hi @mira Yes, I use io_uring by default. Would another I/O mode be better? I can't test with a different kernel at the moment, unless I'm sure the live-migration bug described in this thread is fixed. I'd like to, but I can't. Do you know anything about that? Thank you!
  13. [SOLVED] Workaround to use qemu-user-agent without guest-fsfreeze ?

    Hi @mira, thanks for replying. I'm currently running PVE 7.2-1 on kernel 5.13.19-6-pve, because of another issue I had which impacted live migration of VMs between different CPUs. I'm unsure if I can safely upgrade PVE to 7.3 and/or update the kernel to the latest version (I'd be happy to do...
  14. [SOLVED] Workaround to use qemu-user-agent without guest-fsfreeze ?

    Hi all, I think we're all aware of the issue with the snapshot backup locking the guest FS with fsfreeze and not being able to thaw the FS, resulting in a completely locked VM. It's been discussed many times in and out of this forum. I don't know if there is any progress in fixing the very...
  15. Second backup server best practices?

    Hi @Dunuin, thanks for your reply. GC jobs on the main datastore which uses about 17TB used to take ~2 days. Anyway, following @LnxBil 's advice yesterday I added some SSD to the QNAP and activated the autotiering feature, and boom! Latest GC took only 4 hours. I don't see any "IO delay" in...
  16. Second backup server best practices?

    I have no bandwidth or latency issues on the storage server. What makes you think that?
  17. Second backup server best practices?

    Thanks for your replies. Sorry, maybe I explained myself badly. I don't need replication or IOPS, I need more processing performance. Sure I could upgrade the current server in terms of CPU and RAM but I'd rather add another server and make them work in parallel if possible.
  18. Second backup server best practices?

    Hi all, I'm currently running one PBS for my cluster, which stores data on an NFS share backed by an enterprise-grade QNAP storage. Everything went fine until about two weeks ago, when I noticed some scheduled backups were starting to fail sometimes. Also, I see that when I browse VM backups...
  19. [SOLVED] Optimal number of Ceph monitor/manager/MDS

    Thank you very much, @BenediktS ! So: I'm gonna stick with 5 monitors; maybe I'll distribute them better to have a more resilient configuration. I understand managers are fine. I'm not using CephFS, virtual disks only, so I guess I don't need MDS at all?