Before restarting a cluster node after the latest update, only one VM reported an error during the manual migration:
2026-02-08 08:07:40 migration status: completed
2026-02-08 08:07:40 ERROR: tunnel replied 'ERR: resume failed - VM 117 qmp command...
@fiona Is there anything we can do to help resolve this? Or do you just need time? Or should we talk to the QEMU team?
If there is a test build, please grab me to test.
but such things will really keep Proxmox far from "production ready" unless it is stated somewhere (maybe it is, I just did not see it): "LXC is NOT for production - just for fun" ;-)
During the last big breakage (LXC/Docker), I think we were dissuaded from using containers to host mission-critical workloads and told to use VMs instead for anything that must not go down. But that is difficult due to RAM shortages right now...
how come such bugs aren't caught BEFORE the new version is released? bloody hell, testing at Proxmox seems to be nowhere ;-(
no, I am not complaining - just stating the fact
anyway, thanks to all who posted the workaround
I don't know if this was implied, but software-defined storage is also a requirement, so that you can set quotas on the storage used by your clients / customers / tenants.
What is still missing to turn this into a fully fledged setup:
RAM restrictions...
Copy the folder and all files from
/var/lib/pmg/templates
to
/etc/pmg/templates
Then edit the file /etc/pmg/templates/main.cf.in:
in the smtpd_sender_restrictions part,
add the content below:
check_sender_access regexp:/etc/postfix/Blocked_Sender_Domain...
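For reference, the result in main.cf.in looks roughly like the fragment below. This is only a sketch: the map file name here is a placeholder, since the exact path in the post above is truncated, and the other entries in the restriction list are whatever PMG already ships.

```
# /etc/pmg/templates/main.cf.in -- sketch only; the map path is a
# placeholder (the original post's file name was truncated)
smtpd_sender_restrictions =
    check_sender_access regexp:/etc/postfix/blocked_sender_domains
    [keep the existing PMG entries below this line]
```

The referenced file uses the standard Postfix regexp table format, one pattern and action per line, e.g. `/\.example\.com$/ REJECT Sender domain blocked`.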
Hey,
Just setting up PBS with a Dell TL2000 that has two SAS LTO-5 drives connected to the server. I have updated the firmware of the TL2000 and rebooted the server in order to troubleshoot.
When I try to run proxmox-tape barcode-label --pool I get the...
That's what I figured, but it's not what happened here. One node was already off, I lost another for an unknown reason, booted the node I had previously shut down, another node went down, and then when I turned on the node that originally went...
We can't tell you what your server or your VMs are doing at 4 a.m.; only you can know that. You haven't even told us what kind of containers or VMs are involved, what runs on them, and so on.
Why not shut it down at night...
You can lose 2 nodes, but then the cluster no longer has quorum, so VMs "stand still" because no node "knows" whether what is happening is what is supposed to happen in the cluster :) That's the reason there is a quorum in the first place, so that...
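The arithmetic behind that is the standard majority rule. This is not Proxmox code, just a sketch of the default corosync behaviour (no special options like last_man_standing or a QDevice):

```python
def quorum(n_nodes: int) -> int:
    """Minimum votes needed for a cluster to quorate (strict majority)."""
    return n_nodes // 2 + 1

def max_node_losses(n_nodes: int) -> int:
    """How many nodes can fail while the survivors still quorate."""
    return n_nodes - quorum(n_nodes)

# A 5-node cluster quorates with 3 votes, so losing 2 nodes is the limit;
# a 3-node cluster can only tolerate a single failure.
print(quorum(5), max_node_losses(5))   # 3 2
print(quorum(3), max_node_losses(3))   # 2 1
```

This is also why a 2-node cluster tolerates zero failures: quorum(2) is 2, so one surviving node is never a majority on its own.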
I am currently in the process of doing that. Most of my VMs got corrupted in this incident, so I need to rebuild them all, some from backups, some I am able to repair. It's a pain. Still have no idea why this happened, but from what I've been told on...