Hello, I've been using Proxmox 5.0 for some time. I've now installed 5.4 on a new server (OVH) and I'm having some weird network issues with LXC containers.
I have an LXC container (Ubuntu 18.04) with a public IP (eth0 bridged to vmbr0, MAC = virtual MAC created in the OVH panel, gateway = host IP but ending in .254, as per OVH requirements), and it had been working for a week or so without any problem.
Now when I restart the container I can't ping or SSH into it.
At first I suspected some OVH routing issue because of the virtual MAC or similar, but after some trial and error I've found that if I rename the network interface in the Proxmox GUI (from eth0 to eth3 or whatever) it starts working... until the next reboot. Then, if I rename it back from eth3 to eth0, it starts receiving network traffic again.
I've tried disabling the firewall (although it was enabled and working before), with no success.
Now I've seen the same problem with another LXC container that has only a local IP (192.168.0.113) tied to vmbr1, so I would say it's not OVH-related. In this case I can't even ping the container from the host. But if I rename the container's network interface in the GUI, it starts to work.
It seems like something on the Proxmox host or in the container doesn't start until it detects a GUI change, or I have some routing misconfiguration, but it's weird that it starts working when the interface name changes.
Or maybe it's a problem caused by Ubuntu switching to netplan? I hadn't read about it before, and now /etc/network/interfaces inside the containers is empty...
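For what it's worth, if netplan is managing the container's network, I'd expect a YAML file under /etc/netplan/ inside the container instead of entries in /etc/network/interfaces. Something roughly like this (the filename and addresses below are placeholders/guesses, not my real config):

Code:
# Hypothetical /etc/netplan/10-lxc.yaml inside the container
# (placeholder addresses; filename and gateway are assumptions)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.0.2.10/24]
      gateway4: 192.0.2.254

If a file like that exists, it would at least confirm netplan is in charge of eth0 inside the container.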
I have also installed all updates (no-subscription repo) on the host.
Code:
proxmox-ve: 5.4-1 (running kernel: 4.15.18-14-pve)
pve-manager: 5.4-5 (running version: 5.4-5/c6fdb264)
pve-kernel-4.15: 5.4-2
pve-kernel-4.15.18-14-pve: 4.15.18-38
pve-kernel-4.15.18-12-pve: 4.15.18-36
.....
It's using vmbr0 for public IPs and vmbr1 for private connections between containers.
Code:
auto vmbr0
iface vmbr0 inet static
    address 188.***.***.117/24
    gateway 188.***.***.254
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address 192.168.0.1
    netmask 255.255.0.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '192.168.0.0/16' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '192.168.0.0/16' -o vmbr0 -j MASQUERADE
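As a side note, I believe the ip_forward part could also be made persistent via sysctl instead of the post-up echo (just a sketch of an alternative, untested on my side):

Code:
# /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/)
net.ipv4.ip_forward = 1

which would then be applied with sysctl -p, so forwarding survives reboots regardless of the bridge's post-up hooks.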
Any ideas, or anyone with the same issue? I don't think I'm the only one running Proxmox 5.4 with Ubuntu 18.04 containers.