LXC unreachable after change of host interfaces

abonilla

Member
Oct 26, 2020
Hello -

I changed my hosts' network from a single 1GbE interface to bonded 10GbE interfaces and adapted the existing config accordingly. Since then, VMs work great, but containers are unable to reach the network. I have rebooted all nodes and am running Virtual Environment 6.2-12.


I simply use vmbr0 for everything since I've got a fairly simple configuration. Any idea why my LXCs won't reach the network? (even newly created ones)

auto lo
iface lo inet loopback

iface eno1 inet manual

auto enp65s0f0
iface enp65s0f0 inet manual

auto enp65s0f1
iface enp65s0f1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp65s0f1
    bond-miimon 100
    bond-mode balance-rr
    mtu 9000

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.233/24
    gateway 10.0.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    mtu 9000

and the interfaces for a host are shown as



6: enp65s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether a0:36:9f:47:ba:e0 brd ff:ff:ff:ff:ff:ff
7: enp65s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether a0:36:9f:47:ba:e0 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether a0:36:9f:47:ba:e0 brd ff:ff:ff:ff:ff:ff
9: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether a0:36:9f:47:ba:e0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.233/24 brd 10.0.0.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 2601:347:4200:4c10:a236:9fff:fe47:bae0/64 scope global dynamic mngtmpaddr
       valid_lft 299sec preferred_lft 299sec
    inet6 fe80::a236:9fff:fe47:bae0/64 scope link
       valid_lft forever preferred_lft forever
10: tap106i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether aa:2d:b0:da:ef:0a brd ff:ff:ff:ff:ff:ff
11: tap105i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr105i0 state UNKNOWN group default qlen 1000
    link/ether 7e:fe:c2:e6:9c:54 brd ff:ff:ff:ff:ff:ff
12: fwbr105i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 16:59:d6:ee:40:20 brd ff:ff:ff:ff:ff:ff
13: fwpr105p0@fwln105i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether f6:e9:b9:c8:6b:fa brd ff:ff:ff:ff:ff:ff
14: fwln105i0@fwpr105p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr105i0 state UP group default qlen 1000
    link/ether 16:59:d6:ee:40:20 brd ff:ff:ff:ff:ff:ff
15: tap103i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr103i0 state UNKNOWN group default qlen 1000
    link/ether e6:69:f7:da:6c:59 brd ff:ff:ff:ff:ff:ff
16: fwbr103i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 66:dd:d1:f5:8c:cf brd ff:ff:ff:ff:ff:ff
17: fwpr103p0@fwln103i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 26:c6:6e:85:b0:39 brd ff:ff:ff:ff:ff:ff
18: fwln103i0@fwpr103p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr103i0 state UP group default qlen 1000
    link/ether 66:dd:d1:f5:8c:cf brd ff:ff:ff:ff:ff:ff
 
hi,

I simply use vmbr0 for everything since I've got a fairly simple configuration. Any idea why my LXCs won't reach the network? (even newly created ones)
have you tried disabling the CT firewall?

can you post a container config? pct config CTID
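
For reference, the per-NIC firewall flag can also be cleared from the CLI. A rough sketch (CTID is a placeholder; keep your container's actual net0 options, including hwaddr, since the whole net0 string gets replaced):

pct set CTID --net0 name=eth0,bridge=vmbr0,firewall=0,ip=dhcp,type=veth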
 
Thanks for looking into it...

have you tried disabling the CT firewall?

Yes, it is disabled. I recall this LXC worked fine before changing to the bond.

can you post a container config? pct config CTID
root@r620-2:~# pct config 101
arch: amd64
cores: 2
hostname: opensuse-lxc
memory: 3096
nameserver: 10.0.0.10
net0: name=eth0,bridge=vmbr0,hwaddr=22:64:61:A1:86:FA,ip=dhcp,type=veth
net1: name=bond0,bridge=vmbr0,firewall=1,hwaddr=FE:2D:32:A7:13:35,ip=dhcp,type=veth
net2: name=vmbr0,bridge=vmbr0,hwaddr=26:B5:60:E8:4E:36,ip=dhcp,type=veth
ostype: opensuse
rootfs: local-lvm:vm-101-disk-0,size=24G
searchdomain: 5glinux.com
swap: 3096
unprivileged: 1

root@r620-2:~# cat /var/lib/lxc/101/config
lxc.cgroup.relative = 0
lxc.cgroup.dir.monitor = lxc.monitor/101
lxc.cgroup.dir.container = lxc/101
lxc.cgroup.dir.container.inner = ns
lxc.arch = amd64
lxc.include = /usr/share/lxc/config/opensuse.common.conf
lxc.include = /usr/share/lxc/config/opensuse.userns.conf
lxc.seccomp.profile = /usr/share/lxc/config/pve-userns.seccomp
lxc.apparmor.profile = generated
lxc.apparmor.raw = deny mount -> /proc/,
lxc.apparmor.raw = deny mount -> /sys/,
lxc.mount.auto = sys:mixed
lxc.monitor.unshare = 1
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
lxc.tty.max = 2
lxc.environment = TERM=linux
lxc.uts.name = opensuse-lxc
lxc.cgroup.memory.limit_in_bytes = 3246391296
lxc.cgroup.memory.memsw.limit_in_bytes = 6492782592
lxc.cgroup.cpu.shares = 1024
lxc.rootfs.path = /var/lib/lxc/101/rootfs
lxc.net.0.type = veth
lxc.net.0.veth.pair = veth101i0
lxc.net.0.hwaddr = 22:64:61:A1:86:FA
lxc.net.0.name = eth0
lxc.net.0.script.up = /usr/share/lxc/lxcnetaddbr
lxc.net.1.type = veth
lxc.net.1.veth.pair = veth101i1
lxc.net.1.hwaddr = FE:2D:32:A7:13:35
lxc.net.1.name = bond0
lxc.net.1.script.up = /usr/share/lxc/lxcnetaddbr
lxc.net.2.type = veth
lxc.net.2.veth.pair = veth101i2
lxc.net.2.hwaddr = 26:B5:60:E8:4E:36
lxc.net.2.name = vmbr0
lxc.net.2.script.up = /usr/share/lxc/lxcnetaddbr
lxc.cgroup.cpuset.cpus = 12,16


and that host has

root@r620-2:~# brctl show
bridge name     bridge id               STP enabled     interfaces
fwbr101i0       8000.da115666818f       no              fwln101i0
                                                        veth101i0
fwbr101i1       8000.fa74e4214cb2       no              fwln101i1
                                                        veth101i1
fwbr101i2       8000.8ed94c76c990       no              fwln101i2
                                                        veth101i2
fwbr104i0       8000.9e8a73c25d79       no              fwln104i0
                                                        tap104i0
vmbr0           8000.a0369f45ea60       no              bond0
                                                        fwpr101p0
                                                        fwpr101p1
                                                        fwpr101p2
                                                        fwpr104p0
 
you should try setting your bond-mode to active-backup instead of balance-rr and see if it works
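
For example, the bond0 stanza from your /etc/network/interfaces would roughly become the following (just a sketch, adjust the slave names to yours; bond-primary is optional), then reboot or run ifreload -a if ifupdown2 is installed:

auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp65s0f1
    bond-miimon 100
    bond-mode active-backup
    # optional: prefer this slave while it is up
    bond-primary enp65s0f0
    mtu 9000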
 
balance-alb should also work. I guess balance-rr didn't work in this setup because your switch ports aren't in the same trunk group.
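
Either way, it's worth checking which mode the kernel actually applied after the change, e.g.:

cat /proc/net/bonding/bond0 | grep -i "bonding mode"

which should print something like "Bonding Mode: fault-tolerance (active-backup)" once active-backup is in effect.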
 
