VM loses network after migration to another node

andaga

Member
May 24, 2022
Hello!
Since the last update a few days ago (Proxmox community repo), some VMs completely lose their network connection when I do a live migration between nodes. We can't even ping the VM! After migrating the VM back to its original node, the network is back immediately!

Rebooting the VM does not solve the issue! Only migrating the VM back to the original node does.

We absolutely don't understand this issue.

Thank you for your help.

Regards

Anthony
 
could you please post:

- pveversion -v
- VM config
- firewall and network details
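
for example (<vmid> is a placeholder for the affected VM's ID):

Code:
pveversion -v                       # package versions on this node
qm config <vmid>                    # the affected VM's configuration
cat /etc/pve/firewall/cluster.fw    # datacenter-wide firewall rules
cat /etc/pve/firewall/<vmid>.fw     # per-VM firewall rules, if the file exists
cat /etc/network/interfaces         # host network configuration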

thanks!
 
Hello Fabian

Code:
root@pve26:~# pveversion -v
proxmox-ve: 8.0.2 (running kernel: 6.2.16-19-pve)
pve-manager: 8.0.9 (running version: 8.0.9/fd1a0ae1b385cdcd)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.5
proxmox-kernel-6.2.16-19-pve: 6.2.16-19
proxmox-kernel-6.2: 6.2.16-19
proxmox-kernel-6.2.16-18-pve: 6.2.16-18
proxmox-kernel-6.2.16-15-pve: 6.2.16-15
proxmox-kernel-6.2.16-14-pve: 6.2.16-14
proxmox-kernel-6.2.16-12-pve: 6.2.16-12
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx6
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.6
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.10
libpve-guest-common-perl: 5.0.5
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.8.2
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.4
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
openvswitch-switch: 3.1.0-2
proxmox-backup-client: 3.0.4-1
proxmox-backup-file-restore: 3.0.4-1
proxmox-kernel-helper: 8.0.5
proxmox-mail-forward: 0.2.1
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.1
pve-cluster: 8.0.5
pve-container: 5.0.5
pve-docs: 8.0.5
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.0.7
pve-qemu-kvm: 8.1.2-2
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.8
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.0-pve3


Code:
root@pve20:~# pveversion -v
proxmox-ve: 8.0.2 (running kernel: 6.2.16-19-pve)
pve-manager: 8.0.9 (running version: 8.0.9/fd1a0ae1b385cdcd)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.5
proxmox-kernel-6.2.16-19-pve: 6.2.16-19
proxmox-kernel-6.2: 6.2.16-19
proxmox-kernel-6.2.16-18-pve: 6.2.16-18
proxmox-kernel-6.2.16-15-pve: 6.2.16-15
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx6
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.6
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.10
libpve-guest-common-perl: 5.0.5
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.8.2
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.4
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
openvswitch-switch: 3.1.0-2
proxmox-backup-client: 3.0.4-1
proxmox-backup-file-restore: 3.0.4-1
proxmox-kernel-helper: 8.0.5
proxmox-mail-forward: 0.2.1
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.1
pve-cluster: 8.0.5
pve-container: 5.0.5
pve-docs: 8.0.5
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.0.7
pve-qemu-kvm: 8.1.2-2
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.8
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.0-pve3



Code:
root@pve26:~# qm config 132
agent: 1
boot: order=scsi0
cores: 2
cpu: x86-64-v2-AES
ide0: PVESAN1:132/vm-132-cloudinit.qcow2,media=cdrom
ide2: none,media=cdrom
ipconfig0: ip=77.37.8.5/24,gw=77.37.8.1
memory: 4096
meta: creation-qemu=8.0.2,ctime=1694619714
name: XXXXXX
nameserver: 8.8.8.8 9.9.9.9
net0: virtio=A6:F2:01:75:C4:6C,bridge=vmbr1,firewall=1,tag=21
numa: 0
ostype: l26
scsi0: PVESAN1:132/vm-132-disk-0.qcow2,iothread=1,size=80G
scsihw: virtio-scsi-single
serial0: socket
smbios1: uuid=9a58ff43-481e-4463-96b8-37e0effa25c4
sockets: 1
vmgenid: 89af21a4-e5a6-4cd0-a95b-ddfeea34978b




Code:
[OPTIONS]

enable: 1

[RULES]

IN Ping(ACCEPT) -log nolog
IN ACCEPT -p tcp -dport 6556 -log nolog
IN ACCEPT -p udp -dport 4789 -log nolog
IN ACCEPT -p tcp -dport 22 -log nolog
IN ACCEPT -p tcp -dport 8006 -log nolog



Code:
PVE26

auto lo
iface lo inet loopback

iface eno3 inet manual

iface eno4 inet manual

iface eno2 inet manual

iface idrac inet manual

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.0.2.26/24
        gateway 10.0.2.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno4
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

source /etc/network/interfaces.d/*




Code:
PVE20

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto enp2s0f0
iface enp2s0f0 inet manual

auto enp2s0f1
iface enp2s0f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves enp2s0f0 enp2s0f1
        bond-miimon 100
        bond-mode balance-rr

auto bond1
iface bond1 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode balance-rr

auto vmbr0
iface vmbr0 inet static
        address 10.0.2.20/24
        gateway 10.0.2.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

source /etc/network/interfaces.d/*
 
if you migrate a VM, can the VM itself communicate with the outside world?
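
e.g. check from inside the guest (the gateway address is taken from the ipconfig0 line above):

Code:
ping -c 3 77.37.8.1   # the VM's default gateway
ip neigh show         # is the gateway's MAC address being resolved?
ping -c 3 8.8.8.8     # anything reachable beyond the gateway?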
 
The issue doesn't affect all VMs! Only some of them!

But an affected VM has absolutely no traffic! No internet, nothing!

Until 2-3 days ago, when I applied the last update, I had no issues! I was able to migrate all VMs to all nodes without problems! But since this last update I have this issue...
 
what differentiates the affected VMs from those which are not?
 
Nothing! They all have the same network config!

I just migrated this VM again from PVE26 to PVE24 and then to PVE20... No issue this time! The issue is really intermittent, and the affected VM is different each time!
 
I'm installing all the latest packages/updates released today! I will try with those...
 
Right now I have a VM that won't connect to the network after migration!
Which commands can I try to debug this?
 
the usual network debugging - attempt a connection, and watch with tcpdump along the (expected) path to see where the traffic gets lost.
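
for example (a sketch using VM 132 on PVE26 from above - the tap device is named tap<vmid>i<index>, and since net0 has firewall=1 it sits behind a fwbr<vmid>i<index> firewall bridge):

Code:
# on the target node, while pinging the VM from outside:
tcpdump -eni tap132i0 icmp           # directly at the VM's tap device
tcpdump -eni fwbr132i0 icmp          # at the per-VM firewall bridge
tcpdump -eni vmbr1 icmp              # at the VLAN-aware bridge
tcpdump -eni eno4 vlan 21 and icmp   # at the physical uplink, VLAN-tagged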
 
We may have found the cause of the lost network connections on some VMs!

We suspect the network configuration: our bridges sit on bonds of 4 ports in balance-rr mode.
We have now switched to active-backup on some nodes and to LACP (layer 3+4) on others!

We think that with more than 2 ports in balance-rr (i.e. 3 ports or more), Proxmox gets completely lost!

Has anyone already seen issues with this config?
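
For reference, roughly what we switched to, in the same /etc/network/interfaces style as above (slave names as on PVE20; 802.3ad also needs a matching LACP port-channel on the switch):

Code:
auto bond1
iface bond1 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup

# or, on the nodes we moved to LACP:
auto bond1
iface bond1 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4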
 
No! All the nodes have the same config and the same VLANs available!

The problem actually occurred not only on migration but also on VMs running normally on a node! Sometimes the network got stuck, sometimes not! We spent 2 days testing multiple network configurations and diagnosing all the traffic between the ISP, firewall, switch, Proxmox nodes, and guests!
Hundreds of Wireshark captures and attempts!

Finally, after changing balance-rr to active-backup or LACP, everything now seems to work normally!

On the nodes with only 2 ports in the bond + bridge, everything was working fine! Only the nodes with more than 2 ports in balance-rr bonding mode were failing and causing issues...
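
To check which mode a bond is actually running and the state of its slaves, the kernel's view can be read from /proc (bond1 as an example name):

Code:
cat /proc/net/bonding/bond1
# e.g. "Bonding Mode: IEEE 802.3ad Dynamic link aggregation"
# or   "Bonding Mode: fault-tolerance (active-backup)"
# followed by per-slave MII status for each bonded NIC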
 
