After migrating to another node there's no internet connection

menelaostrik

Sep 7, 2020
Hi everyone,
I've been facing a rather bizarre issue on PVE 6.4-15.

I have a small 2-node cluster, and every time I migrate a VM to the other node it loses network connectivity.
To work around it, I have to remove the NICs and re-add them.
Afterwards I have to edit /etc/sysconfig/network-scripts/ifcfg-ens18 inside the guest to update the HWADDR field (I have also tried it without the HWADDR field, with the same result).
Then I have to restart NetworkManager or the network service (depending on the guest OS).
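Roughly, the in-guest workaround looks like this (just a sketch for a CentOS/RHEL guest; the interface name ens18 is from my setup and the exact commands depend on the guest OS):
Code:
# inside the affected guest, after removing and re-adding the NIC in Proxmox
ip link show                                   # note the NIC's current MAC address
vi /etc/sysconfig/network-scripts/ifcfg-ens18  # update (or drop) the HWADDR= line
systemctl restart NetworkManager               # or: systemctl restart network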

Any clues on what could be triggering this behavior?
 
Hello,

- May I ask whether the issue also occurs on LXC containers, or just on VMs?
- Are you sure the MAC address is not duplicated on another VM?
- Do you have any specific configuration on your network?
 
Hi,
I don't use any LXC containers, just a few VMs.
Yes, I have double-checked that the MACs are unique. Moreover, the MAC address doesn't change when I migrate; it stays the same, and it was working flawlessly before the migration.
As for the last question, I noticed something strange (strange as in I don't remember configuring it that way) in /etc/network/interfaces: the configuration differs between the two nodes. One is set up as a normal Linux bridge and the other as an OVSBridge.
I'm pasting both files below.
On server1:
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto eno3
iface eno3 inet manual

auto eno4
iface eno4 inet manual

auto eno49
iface eno49 inet manual

auto eno50
iface eno50 inet manual

auto eno51
iface eno51 inet manual

auto eno52
iface eno52 inet manual

auto bond0
iface bond0 inet static
        address 10.10.10.10/24
        bond-slaves eno2 eno3 eno4 eno50 eno51
        bond-miimon 100
        bond-mode balance-rr

auto vmbr0
iface vmbr0 inet static
        address 78.108.36.201/28
        gateway 78.108.36.193
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
On server2:
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0

auto eno2
iface eno2 inet manual

auto eno3
iface eno3 inet manual

auto eno4
iface eno4 inet manual

auto eno49
iface eno49 inet manual

auto eno50
iface eno50 inet manual

auto eno51
iface eno51 inet manual

auto eno52
iface eno52 inet manual

auto bond0
iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves eno2 eno3 eno4 eno50 eno51
        bond-miimon 100
        bond-mode balance-rr

auto vmbr0
iface vmbr0 inet static
        address 78.108.36.202/28
        gateway 78.108.36.193
        ovs_type OVSBridge
        ovs_ports eno1

auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0

Do you think this configuration difference could be "breaking" the migration?
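A quick way to double-check what each node is actually running (a sketch; ovs-vsctl assumes the openvswitch-switch package is installed on that node):
Code:
# on each node: kernel-level details of vmbr0 (shows "bridge" vs. "openvswitch")
ip -d link show vmbr0

# on the OVS node only
ovs-vsctl show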
 
Hello,

Thank you for the output of the network configuration!

On which bridge is the VM configured? Can you please also provide us with the VM config (qm config <VMID>)?
 
All VMs are configured on vmbr0. Here is the config:
Code:
root@hyper:/etc/network# qm config 301
agent: 1,fstrim_cloned_disks=1
boot: order=scsi0
cores: 4
cpu: host
ide2: none,media=cdrom
memory: 8000
name: centos8
net0: virtio=4A:09:4F:42:82:CA,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsi0: local-zfs:vm-301-disk-0,size=32G,ssd=1
scsi1: local-zfs:vm-301-disk-1,size=60G,ssd=1
scsi2: hdd-zfs:vm-301-disk-0,iops_rd=150,iops_rd_max=250,iops_wr=150,iops_wr_max=250,size=4G
scsihw: virtio-scsi-pci
smbios1: uuid=a6f0d334-2e35-400b-b542-e7dab3c066f9
sockets: 1
vmgenid: da49b8e6-c2e3-40f6-8b60-2cee68b6f5ad
root@hyper:/etc/network#
 
Most likely it is because of the different network configuration between the nodes. To narrow it down, you can migrate the same VM that lost its network back to the original node and see if the network comes back.
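For example, with the VM from above (the target node name is a placeholder; use the node it originally ran on):
Code:
# live-migrate VM 301 back to its original node
qm migrate 301 <target-node> --online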
 
As said, the network configuration on both nodes should be the same (or at least similar) to avoid issues like this.
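As an illustration only, not a drop-in config: if server2 were aligned to the plain Linux bridge layout used on server1, its eno1/vmbr0 stanzas could look roughly like the following. Keep in mind that switching vmbr0 away from OVS will briefly drop the node's uplink, so it is best done from a console/IPMI session.
Code:
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 78.108.36.202/28
        gateway 78.108.36.193
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0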
 
