Hi,
I see that systemd-timesyncd is not installed by default. I was wondering: what happens if I do install it? Isn't timekeeping important for cluster operations?
Thank you
Hi @cheiss ,
thanks for your kind reply. Unfortunately, solutions that lower the security level are not an option in my organization. I'd rather lose functionality than security.
I commented on the feature request; I hope they ship the feature soon.
Is it possible to somehow disable the call to guest-agent fsfreeze when performing a snapshot backup?
Despite updating to the latest Proxmox VE 7.3-4 and the opt-in 5.19.17-2-pve kernel, I'm still hitting the issue where the fsfreeze command blocks the guest filesystem and there's no solution except...
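(Update for anyone finding this later: newer qemu-server releases added a freeze-fs-on-backup sub-option to the VM's agent setting, which skips the guest fsfreeze/thaw during backups while keeping the agent enabled. Below is a minimal sketch of flipping it through the API with the proxmoxer Python library; the host, credentials, node name and VMID are placeholders, and it assumes your qemu-server version already ships that option.)

    from proxmoxer import ProxmoxAPI

    # Placeholder connection details -- adjust host, user, password and VMID.
    pve = ProxmoxAPI("pve.example.com", user="root@pam", password="secret", verify_ssl=False)

    # Keep the guest agent enabled but skip fsfreeze/fsthaw during snapshot backups.
    # Only works on qemu-server versions that know the freeze-fs-on-backup flag.
    pve.nodes("pve1").qemu(100).config.put(agent="enabled=1,freeze-fs-on-backup=0")

The same thing should be doable directly on the node with qm set 100 --agent enabled=1,freeze-fs-on-backup=0.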
Hi @mira ,
I successfully migrated to Proxmox 7.3 & kernel 5.19.17-2-pve. Everything went smoothly, including live-migrations.
Unfortunately, though, the original issue still happens with CentOS 7.9 guests.
Any clue?
Hi @mira ,
you said I will not be able to live migrate VMs from nodes running kernel 5.13 to nodes running kernel 5.19.
What if I update the kernel in steps, e.g. 5.13 -> 5.15 -> 5.19?
Thank you
Hi @mira ,
thanks a lot! We're planning to upgrade to Proxmox VE 7.3 soon, so we'll take the opportunity to upgrade the kernel as well.
I'm marking the thread as [SOLVED].
Hi @mira
Yes, I use io_uring by default. Would it be better to use another I/O mode? (A sketch of how to switch it is at the end of this post.)
I can't test with a different kernel at the moment unless I'm sure the live-migration bug described in this thread is fixed. I'd like to, but I can't. Do you know anything about that?
Thank you!
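In case it's useful, switching the disk I/O mode just means re-setting the disk entry with a different aio value (io_uring, native or threads). A rough sketch via the proxmoxer Python library follows; the host, credentials, node, VMID and volume name are made-up placeholders, and the volume string has to match the disk that is already attached:

    from proxmoxer import ProxmoxAPI

    # Placeholder connection details -- adjust to your environment.
    pve = ProxmoxAPI("pve.example.com", user="root@pam", password="secret", verify_ssl=False)

    # Re-declare the existing scsi0 disk with aio=native instead of io_uring.
    # The volume name must be the one already attached to this VM.
    pve.nodes("pve1").qemu(100).config.put(scsi0="local-lvm:vm-100-disk-0,aio=native")

As far as I know, the new mode only takes effect after the VM is powered off and started again.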
Hi @mira, thanks for replying.
I'm currently running PVE 7.2-1 on kernel 5.13.19-6-pve because of another issue I had, which impacted live migration of VMs between different CPUs. I'm unsure whether I can safely upgrade PVE to 7.3 and/or update the kernel to the latest version (I'd be happy to do...
Hi all,
I think we're all aware of the issue with the snapshot backup locking the guest FS with fsfreeze and then not being able to thaw it, resulting in a completely locked VM. It's been discussed many times in and out of this forum.
I don't know if there is any progress in fixing the very...
Hi @Dunuin, thanks for your reply.
GC jobs on the main datastore, which uses about 17 TB, used to take ~2 days. Anyway, following @LnxBil's advice, yesterday I added some SSDs to the QNAP and activated the auto-tiering feature, and boom! The latest GC took only 4 hours.
I don't see any "IO delay" in...
Thanks for your replies.
Sorry, maybe I explained myself badly. I don't need replication or IOPS; I need more processing performance.
Sure, I could upgrade the current server in terms of CPU and RAM, but I'd rather add another server and make the two work in parallel, if possible.
Hi all,
I'm currently running one PBS for my cluster, which stores data on an NFS share backed by an enterprise-grade QNAP storage.
Everything went fine until about two weeks ago, when I noticed some scheduled backups were starting to fail sometimes. Also, I see that when I browse VM backups...
Thank you very much, @BenediktS !
So:
I'm gonna stick with 5 monitors; maybe I'll distribute them better to have a more resilient configuration
I understand managers are fine
I'm not using CephFS, virtual disks only. So I guess I don't need MDS at all?