I had one more system with an ifupdown2 issue after upgrading to Trixie. All 5 systems with the ifupdown2 bug were fixed by removing comments from /etc/network/interfaces. Specifically, the comment lines had an interface name within the...
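For illustration only (the interface name, address, and comment wording below are placeholders, not taken from these systems), the problematic pattern was a comment line that mentions an interface name inside a stanza, something like:
auto vmbr0
iface vmbr0 inet static
        # uplink moved from eno1 to eno2 on vmbr0   (comment naming interfaces; deleting lines like this avoided the bug)
        address 192.0.2.10/24
        bridge-ports eno2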
I solved this. The node that had migrations failing to and from it needed an adjustment in /etc/network/interfaces.
The other 4 nodes have vmbr3 set as:
auto vmbr3
iface vmbr3 inet static
address 10.1.10.7/24
gateway...
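(The quoted lines above are cut off at the gateway. Purely as a generic illustration of what a full Proxmox bridge definition usually contains, with the gateway and bridge-ports values below being placeholders rather than this cluster's real settings:)
auto vmbr3
iface vmbr3 inet static
        address 10.1.10.7/24
        gateway 10.1.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0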
Yes, it’s possible. The solution is to use physical disk passthrough to the VM.
Instead of adding the drive as Proxmox storage, you can pass the whole disk directly to a VM.
The VM will then see it as if it were physically attached, and you can...
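In practice this is done with qm set and a stable /dev/disk/by-id path; the VM ID, bus slot, and disk ID below are placeholders:
ls -l /dev/disk/by-id/                                      # find a stable identifier for the disk
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-EXAMPLESERIAL   # attach the whole disk to VM 100 as scsi1
The disk then shows up in the VM's hardware list as scsi1 and inside the guest as an ordinary block device.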
From a node that is okay to live migrate to:
root@pve2:[~]:# pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-2-pve)
pve-manager: 9.0.5 (running version: 9.0.5/9c5600b249dbfd2f)
proxmox-kernel-helper: 9.0.3...
here is a migration that worked to an old node:
task started by HA resource agent
2025-08-21 15:40:39 conntrack state migration not supported or disabled, active connections might get dropped
2025-08-21 15:40:39 starting migration of VM 107 to...
I have 5 nodes. pve was recently installed on 2 of the nodes.
I can live migrate to the 3 older nodes.
The 2 newer nodes sometimes get this warning (and sometimes no warning at all): conntrack state migration not supported or disabled, active...
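Since only the two freshly installed nodes show it, one quick sanity check is to diff the package versions between an old node and a new one. The hostnames below are examples, and this assumes root SSH between cluster nodes, which a PVE cluster normally has:
diff <(ssh pve2 pveversion -v) <(ssh newnode1 pveversion -v)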
I also noticed the ext4 part. However, the issue starts 100% of the time when a node is in the process of shutting down for a reboot: shortly after the point where it stops its OSDs, the hang starts on some or all of the remaining nodes.
I did not...
So with no node restarts, there have been no hangs on PVE nodes and KVMs.
I have had this set in sysctl.d since 2019, per
https://tracker.ceph.com/projects/ceph/wiki/Tuning_for_All_Flash_Deployments#Sample-sysctlconf
could these be causing an...
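To see exactly which of those 2019-era overrides are still active, you can dump the drop-in files and check a live value against what the kernel is running with (the parameter name below is just one example of the kind of setting on that Ceph tuning page):
grep -rH . /etc/sysctl.d/          # list every line currently shipped in sysctl.d
sysctl net.core.rmem_max           # show the live value of one of them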
Also, one thing I ran into: I tried to reboot the system after removing one of the rpool drives. I was unable to run zpool attach because for some reason the newly installed drive was in use, so I tried to reboot. The reboot failed ...
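For what it's worth, when a replacement rpool disk refuses to attach because it is "in use", the usual suspects are leftover signatures or stale ZFS labels on it. A generic sequence (device names are placeholders, double-check them before wiping anything):
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sdX   # confirm nothing on it is mounted or used as swap
wipefs -a /dev/sdX                              # clear old filesystem/RAID signatures
zpool labelclear -f /dev/sdX                    # clear stale ZFS labels
zpool attach rpool <existing-mirror-device> /dev/sdX
On a Proxmox boot disk the attach normally targets the ZFS partition rather than the whole disk, and the new ESP gets set up with proxmox-boot-tool format/init, but that is a separate step.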
You might try turning on CSM and legacy boot in the BIOS to see if you can get it booted that way. If that works, check this thread:
https://forum.proxmox.com/threads/help-with-uefi-boot-issue-after-pve-9-upgrade.169796/
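Before flipping CSM on, it can also help to confirm how the box currently boots and which ESPs are registered (both commands are standard on PVE installs):
proxmox-boot-tool status     # shows whether the system booted via UEFI or legacy BIOS, and the configured ESPs
efibootmgr -v                # lists the firmware's UEFI boot entries (only works when booted in UEFI mode)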
In the last 6 hours no new hangs occurred [ hangs = "blocked for more than 122 seconds" ]. I call it a hang because when one occurs the keyboard hangs: certainly inside a KVM, not sure if at the PVE CLI.
There are 3 nodes and a few KVMs with hangs in...
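That "blocked for more than N seconds" message comes from the kernel's hung-task watchdog; the threshold a given node is actually using can be checked with:
sysctl kernel.hung_task_timeout_secs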
From the new node. Note we have not moved the subscription over, so it is using the testing repository.
# pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-2-pve)
pve-manager: 9.0.5 (running version: 9.0.5/9c5600b249dbfd2f)
proxmox-kernel-helper: 9.0.3...
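To confirm which Proxmox repositories a node is actually pulling from (file names vary between the classic .list and deb822 .sources formats, so this just greps both locations):
grep -rn "proxmox" /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null
apt-cache policy pve-manager      # shows which repo the installed version came from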
Using PVE 9.0.5 on a 5-node Ceph cluster. Nodes have a mix of ZFS and non-ZFS root/boot disks, along with one large NVMe formatted ext4 for vzdumps. We also use PBS.
I have a cron script which we have used for years that checks this:
dmesg -T |...
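Not the author's exact script, but a minimal sketch of that kind of check (assumes a working local MTA so mail(1) can deliver to root):
#!/bin/sh
# alert if the kernel has logged any hung-task warnings since boot
hung=$(dmesg -T | grep -i "blocked for more than")
if [ -n "$hung" ]; then
    echo "$hung" | mail -s "hung task on $(hostname)" root
fi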