[SOLVED] Container network not working until interface renamed

JamesYS

Member
May 9, 2019
Hello, I've been using Proxmox 5.0 for some time. Now I've installed 5.4 on a new server (OVH) and I'm having some weird network issues with LXC containers.

I have an LXC container (Ubuntu 18.04) with a public IP (eth0 bridged to vmbr0, MAC = virtual MAC created in the OVH panel, GW = host IP but ending in 254 as per OVH requirements), and it had been working for a week or so without any problem.
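For context, this kind of setup lives as a net0 line in the container's config under /etc/pve/lxc/. A sketch of what the configuration described above might look like (the container ID, addresses, and MAC below are placeholders, not the real values from this thread):

```
# /etc/pve/lxc/101.conf (excerpt; ID and all values are hypothetical)
net0: name=eth0,bridge=vmbr0,hwaddr=02:00:00:AA:BB:CC,ip=188.x.x.200/24,gw=188.x.x.254,type=veth
```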

Now when I restart the container I can't ping or SSH into it.
At first I thought it was some OVH routing issue because of the virtual MAC or similar, but after some trial and error I've seen that if I rename the network interface in the Proxmox GUI (from eth0 to eth3 or whatever) it starts working... until the next reboot. If I then rename it back from eth3 to eth0, it starts receiving network requests again.

I've tried disabling the firewall (it was enabled and working before), but no success.

Now I've seen the same problem with another LXC container which has only a local IP (192.168.0.113) tied to vmbr1, so I would say it's not OVH related. In this case I can't even ping from the host to the container. But if I rename the container's network interface in the GUI, it starts to work.

It seems like something in the Proxmox host or container is not started until it detects GUI changes, or I have some routing misconfiguration, but it's weird that it starts to work when the eth name changes.
Or maybe it's a problem with Ubuntu switching to netplan? I hadn't read about it before, and now /etc/network/interfaces in the containers is empty...

I have also installed all updates (no-subscription repo) on the host.
Code:
proxmox-ve: 5.4-1 (running kernel: 4.15.18-14-pve)
pve-manager: 5.4-5 (running version: 5.4-5/c6fdb264)
pve-kernel-4.15: 5.4-2
pve-kernel-4.15.18-14-pve: 4.15.18-38
pve-kernel-4.15.18-12-pve: 4.15.18-36
.....

It's using vmbr0 for public IPs and vmbr1 for private connections between containers.

Code:
auto vmbr0
iface vmbr0 inet static
    address 188.***.***.117/24
    gateway 188.***.***.254
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address 192.168.0.1
    netmask 255.255.0.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s '192.168.0.0/16' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '192.168.0.0/16' -o vmbr0 -j MASQUERADE
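A side note on the forwarding setup above: the post-up echo enables IP forwarding only when the bridge comes up, and it is reset if the setting is changed elsewhere. A more persistent sketch, assuming a standard Debian/Proxmox host, would be a sysctl entry:

```
# /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/)
net.ipv4.ip_forward = 1
```

Applied with `sysctl -p`, this survives reboots independently of the bridge lifecycle.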

Any ideas, or anyone with the same issues? I don't think I'm the only one running Proxmox 5.4 with Ubuntu 18.04.
 
JamesYS said:
Or maybe some problems because ubuntu switching to netplan? I haven't read about it before and now the /etc/network/interfaces on containers are empty...


Do you use our container templates? In our Ubuntu 18.04 template the network configuration is defined in /etc/systemd/network/eth0.network, and netplan is not used.
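For reference, a static systemd-networkd configuration looks roughly like the sketch below; the exact file the template generates may differ, and the address and gateway here are placeholders:

```
# /etc/systemd/network/eth0.network (sketch; values are hypothetical)
[Match]
Name = eth0

[Network]
Address = 192.168.0.113/24
Gateway = 192.168.0.1
```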


Also check the settings currently in effect with
Code:
ip addr

If everything looks OK and it still does not work, follow the packets with
Code:
tcpdump -e -n -i eth0

etc., in order to figure out where the packets get lost.
 
Sorry for the late reply; the problem is solved...
I had the wrong GW on the LXC containers, and the behaviour was so strange that I didn't notice it. Just updated the thread title to [SOLVED].
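For anyone hitting the same symptoms: the gateway OVH expects is the host's IP with the last octet replaced by 254, so it is easy to sanity-check. A minimal shell sketch (the address below is a made-up example, not the real one from this thread):

```shell
#!/bin/sh
# Derive the OVH-style gateway from a host IP by replacing the last octet with 254.
host_ip="188.165.10.117"     # hypothetical host address
ovh_gw="${host_ip%.*}.254"   # strip the last octet, append .254
echo "$ovh_gw"
```

Comparing that value against the default route inside the container (`ip route show default`) would have caught the misconfiguration immediately.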

PS: yes, I use your templates; good to know netplan is not used there :p
 