LXC networking not working after migrating from Proxmox 4 to Proxmox 5

John Wright

Member
Mar 8, 2019
Hi,

I have recently added a new Proxmox 5 node to our current Proxmox 4 cluster.

We migrated one of the LXC containers, and at first it seemed to work fine; then, after a few minutes, it stopped responding to ping, SSH, etc.

I can console in, but I cannot get out to the internet.

The network card seems to be sending and receiving packets.

I have had the routers and IPs checked by OVH.

Firewall is disabled.

The same thing happens to other machines that I migrate; all containers run Ubuntu 18.04.1 LTS.

Code:
proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve)
pve-manager: 5.3-11 (running version: 5.3-11/d4907f84)
pve-kernel-4.15: 5.3-2
pve-kernel-4.15.18-11-pve: 4.15.18-34
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-47
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-12
libpve-storage-perl: 5.0-38
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-23
pve-cluster: 5.0-33
pve-container: 2.0-35
pve-docs: 5.3-3
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-18
pve-firmware: 2.0-6
pve-ha-manager: 2.0-8
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 2.12.1-2
pve-xtermjs: 3.10.1-2
qemu-server: 5.0-47
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3

Any help would be appreciated!

Best Regards

John

Existing Node Proxmox 4 Networking Config

WorkingEG64 (1).png

Working Example VM on Proxmox 4 Networking

WorkingEG64-VM.png

New Node Proxmox 5 Networking Config (same as the Proxmox 4 node setup)

NewHGServer.png

Migrated VM Networking (unchanged from when it was working on Proxmox 4)

NewHGServer-VM-Networking-not-working.png

ifconfig output on broken VM

NewHGServer-VM-Networking.png
 


AFAICS you are using bridges for the VMs that are connected directly to physical networks. In such a case it only works if your provider accepts the virtual MAC addresses of the VMs. Since it seems that packets are sent out from the NICs (but no response comes back from internet servers), it is very probable that the provider does not accept the MAC addresses - they may have been changed during the migration process (depending on how exactly you performed it), or your VMs are now connected to different physical ports.

Try to restore all MAC addresses to what they were before the migration, and the VMs should run on the new node as they did before.
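
For example, a minimal sketch of restoring a container's original MAC address from the host shell (the container ID 104, the net1 settings, and the MAC AA:BB:CC:DD:EE:FF below are all placeholders):
Code:
# show the container's current network entries
pct config 104 | grep ^net

# restore the original MAC on net1; -net1 replaces the whole entry,
# so repeat the existing keys (name, bridge, ip, gw, ...)
pct set 104 -net1 name=eth1,bridge=vmbr1,hwaddr=AA:BB:CC:DD:EE:FF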
 
Hi,

I'm not sure if I understand you correctly. As far as I know, the provider has not changed the MAC address policy, and the virtual MAC addresses have not changed either; however, just in case, I have contacted OVH with your suggestion and am waiting on a response.

I have tried removing and re-adding the virtual NICs on a different VM with the same issue, but this did not help.

Best Regards

John
 
Follow the packets with tcpdump from (virtual) NIC to NIC and figure out where they get lost.

If packets are sent correctly to the provider's network and you don't get any response, the reason can be:

- the routing (gateway) is not configured properly, though this is less probable (see the quick check below)
- your provider drops the packets; in that case only they can clarify why
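
For the routing case, a quick check from inside the container might look like this (a sketch; the gateway address below is only a placeholder):
Code:
# verify that a default route exists and points at the expected gateway
ip route show

# verify the gateway itself answers (replace the placeholder address)
ping -c 3 192.0.2.1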

If a packet is sent to the virtual network (e.g. from a VM) and does not arrive at the other side (e.g. on the host), the related configuration in Proxmox has to be investigated. In that case the easiest way is to run
Code:
pvereport
and post the result.
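
For instance, to capture the report into a file for attaching (the file name is just an example):
Code:
pvereport > /tmp/pvereport-$(hostname)-$(date +%F).txt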
 
Unfortunately, I'm not sure how to construct the tcpdump commands to follow the packets from NIC to NIC.

When I did run tcpdump, I found that the kernel was dropping most of the packets, e.g.:

Code:
Ubuntu Bionic Beaver (development branch) worker tty1

worker login: root
Password:
Last login: Sun Mar 10 22:59:30 UTC 2019 on lxc/tty1
Welcome to Ubuntu Bionic Beaver (development branch) (GNU/Linux 4.15.18-11-pve x86_64)

You have mail.
root@worker:~# tcpdump -v -i eth1
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
^C15:11:00.905409 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 5.39.56.161 tell 5.39.56.167, length 46

1 packet captured
189 packets received by filter
182 packets dropped by kernel
root@worker:~#
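
(As an aside, tcpdump's "packets dropped by kernel" counter refers to its own capture buffer overflowing, not to packets dropped on the network; a larger buffer and a narrower filter, as in this sketch, keep that count down.)
Code:
# bigger capture buffer (-B, in KiB) and a filter limited to ARP and ICMP
tcpdump -e -n -B 4096 -i eth1 arp or icmp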

Below is the latest response from OVH support:

From: OVH Support
Dear John,

When IPs are routed to the vRack, there is no virtual MAC address on the IPs.
We only use vMACs when you route the IP block to the server (not to the vRack).

There are no changes to the routing at all on the IPs, because nothing has
changed on the network side. The only change is that you added one extra
server to the vRack, which is why the IPs still work on your other servers
in the vRack.

The only change on your side is that you are using a different version of Proxmox.

Here is a breakdown of how routing of public IPs works in the vRack:
docs.ovh.com/gb/en/dedicated/ip-block-vrack/

On the networking side I do not see any changes and, as per my last response,
you could see that with iftop: I saw packet flow on eth1 (the interface where
I configured 5.39.56.180, which is the vRack port), including our monitoring
pings.

What I can do, if you want, is deliver a temporary block of 4 IPs into your
vRack, and I can also test that with your dedicated server in rescue mode.

For any other questions or concerns, please feel free to contact us through a
support ticket or by phone at 0333 370 0425.

Adam O.
OVH UK Support

I have also attached the output of pvereport from the host.

Best Regards

John Wright
 


Let's assume you are in, e.g., container 104 and want to follow a ping to 8.8.8.8.

It starts in the container (with an ARP packet):
Code:
tcpdump -e -n -i eth1

The next station is the vmbr1 port for this NIC on the host:

Code:
tcpdump -e -n -i veth104i1

Then the bridge itself:

Code:
tcpdump -e -n -i vmbr1

Then the physical NIC enp61s0f1:

Code:
tcpdump -e -n -i enp61s0f1

This is the point where the packet leaves your environment. If everything up to here is correct and you still don't get a response, the problem is in the external environment (router, switch, etc.).

If you do get a response, follow it back the other way round.

In the above example I did not take your firewall into account; it is recommended to disable it for these tests.
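
To have steady traffic to follow while capturing at each station, one option (a sketch reusing container ID 104 from the example above) is a continuous ping started from the host, plus a look at which MAC addresses the bridge has learned:
Code:
# run a continuous ping inside container 104 (stop with Ctrl-C)
pct exec 104 -- ping 8.8.8.8

# list the MAC addresses the bridge has learned on its ports
bridge fdb show br vmbr1

If the container's MAC shows up in the bridge's forwarding table and the packets reach enp61s0f1, the virtual side is fine and the question becomes whether the provider's switch accepts that MAC.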
 
