PVE7 - Local bridges not working with IPv6 ULA

Hi proxmox people ! :-)

The way bridges work has really changed in Debian 11 / Proxmox 7.

When creating a bridge without a slave interface, it works fine with a private IPv4 address. But if we assign it a ULA-type IPv6 address (fdxx:xxxx:xxxx:xxxx...), it is impossible to ping this bridge over IPv6.

The ip link command shows that bridges without a physical slave interface are considered to be in the DOWN state (NO-CARRIER).

To work around the problem, I had to create dummy interfaces and assign them as slaves of the local bridges. With this, it works fine again.
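
For reference, the manual equivalent of this workaround is roughly the following (a sketch with iproute2; the interface names are just examples):
Code:
# create a dummy link, which always reports carrier
ip link add dummy0 type dummy
ip link set dummy0 up
# enslave it so the bridge gets a carrier and comes UP
ip link set dummy0 master vmbr1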

Is this now normal behavior?
 
All tests were done on fresh installations of PVE7 on many servers, so yes, it's ifupdown2.
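
You can double-check which flavour is installed with, for example:
Code:
dpkg -l | grep ifupdown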

Extract of /etc/network/interfaces:
Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto enp1s0
iface enp1s0 inet manual

auto enp2s0
iface enp2s0 inet manual

auto dummy0
iface dummy0 inet manual

auto dummy1
iface dummy1 inet manual

auto vmbr0
iface vmbr0 inet static
    address xx.xx.xx.xx/24
    gateway yy.yy.yy.yy
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    hwaddress zz:zz:zz:zz:zz:zz

iface vmbr0 inet6 static
    address 2001:xxxx:xxxx:xxxx:xxxx:xxxx/64
    gateway 2001:yyyy:yyyy:yyff:ff:ff:ff:ff

auto vmbr1
iface vmbr1 inet static
    address 192.168.100.254/24
    bridge-ports dummy0
    bridge-stp off
    bridge-fd 0

iface vmbr1 inet6 static
    address fd42:dead:beef:64::fe/64

auto vmbr2
iface vmbr2 inet static
    address 192.168.200.254/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

iface vmbr2 inet6 static
    address fd42:dead:babe:64::fe/64

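As an aside, for the dummyX links above to exist at boot, they presumably come from the dummy kernel module here; ifupdown2 can also create them itself by declaring the link type, roughly like this (untested sketch):
Code:
auto dummy0
iface dummy0
    link-type dummy
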
Extract of the sysctl.conf:
Code:
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.default.forwarding=1
net.ipv6.conf.all.proxy_ndp=1
net.ipv6.conf.default.proxy_ndp=1
net.ipv6.conf.all.accept_ra=2
net.ipv6.conf.default.accept_ra=2

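These take effect at boot; to apply them immediately:
Code:
root@pve:~# sysctl -p
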
If I ping the IPv4 address 192.168.100.254 on vmbr1, it works!
Code:
root@pve:~# ping 192.168.100.254
PING 192.168.100.254 (192.168.100.254) 56(84) bytes of data.
64 bytes from 192.168.100.254: icmp_seq=1 ttl=64 time=0.039 ms
64 bytes from 192.168.100.254: icmp_seq=2 ttl=64 time=0.054 ms

If I ping the IPv6 address fd42:dead:beef:64::fe on vmbr1, it works!
Code:
root@pve:~# ping6 fd42:dead:beef:64::fe
PING fd42:dead:beef:64::fe(fd42:dead:beef:64::fe) 56 data bytes
64 bytes from fd42:dead:beef:64::fe: icmp_seq=1 ttl=64 time=0.049 ms
64 bytes from fd42:dead:beef:64::fe: icmp_seq=2 ttl=64 time=0.054 ms

If I ping the IPv4 address 192.168.200.254 on vmbr2, it works!
Code:
root@pve:~# ping 192.168.200.254
PING 192.168.200.254 (192.168.200.254) 56(84) bytes of data.
64 bytes from 192.168.200.254: icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from 192.168.200.254: icmp_seq=2 ttl=64 time=0.053 ms

If I ping the IPv6 address fd42:dead:babe:64::fe on vmbr2, it fails!
Code:
root@pve:~# ping6 fd42:dead:babe:64::fe
PING fd42:dead:babe:64::fe(fd42:dead:babe:64::fe) 56 data bytes
From fd42:dead:beef:64::fe icmp_seq=1 Destination unreachable: Address unreachable
From fd42:dead:beef:64::fe icmp_seq=2 Destination unreachable: Address unreachable

If I run ip link, you can see that the bridge vmbr2 is considered NO-CARRIER, state DOWN:
Code:
root@pve:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
3: enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
4: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
5: dummy1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
6: dummy2: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
8: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
9: vmbr2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff

Then, if I enslave dummy1 into the vmbr2 bridge, it works!
Code:
root@pve:~# ping6 fd42:dead:babe:64::fe
PING fd42:dead:babe:64::fe(fd42:dead:babe:64::fe) 56 data bytes
64 bytes from fd42:dead:babe:64::fe: icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from fd42:dead:babe:64::fe: icmp_seq=2 ttl=64 time=0.067 ms

ip link now shows that the bridge is considered UP:
Code:
root@pve:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
3: enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
4: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
5: dummy1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
6: dummy2: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
8: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
9: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff

Very strange behaviour in PVE7, isn't it?! On PVE5 and PVE6 there is no such problem.
 
Mmm,
can you try to put IPv6 + IPv4 in the same vmbr2? (ifupdown2 supports multiple "address ..." lines)


Code:
auto vmbr2
iface vmbr2
    address 192.168.200.254/24
    address fd42:dead:babe:64::fe/64
    bridge-ports none
    bridge-stp off
    bridge-fd 0

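You can apply the change live with ifreload -a (ifupdown2 only):
Code:
root@pve:~# ifreload -a
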
It seems to be related to the vmbr2 link state being down indeed (maybe because of bridge-ports none).
I'm currently on holiday, but I'll try to look at this when I'm back.

(ifupdown2 3.1 was released last week; I don't know whether it fixes this problem or not.)
 
The behaviour is the same. Even with just one bridge (e.g. vmbr1), whether it carries only an IPv6 address or a dual-stack IPv4+IPv6 setup, it doesn't work. In all cases (one bridge, two bridges, many bridges, single stack, dual stack, ...) I have to create dummy interfaces and enslave them to the bridges in order to get a working local IPv6 ULA.

Try this on a fresh installation of PVE and you'll see. This is clearly because the bridge is considered DOWN.
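
If my understanding is correct, this also explains why only IPv6 is affected: on an interface without carrier, IPv6 duplicate address detection can never complete, so the address stays in the tentative state and the host will not answer for it, while IPv4 has no equivalent check. You can see the flag with:
Code:
root@pve:~# ip -6 addr show dev vmbr2

(look for "tentative" on the fd42: address).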
 
Hi,
Did you ever solve this (without enslaving dummyX)? I notice the same behavior, even with a public IP, not just ULA… IPv4 works, IPv6 doesn't. Most of the time (aha!).
On one machine I have:
Code:
# ip l
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
# ip a
inet6 2001:41d0:yyyy:yyyy::1/64 scope global

On the other:
Code:
# ip l
4: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
# ip a
inet6 2001:41d0:xxxx:xxxx::xxxx:1/64 scope global tentative

Not sure what is causing the first one to be up and the other down… maybe it stays down until you actually put a VM in there (a VM's tap interface would act as a bridge port with carrier, much like the dummy trick)…
 
Seems to be a change in newer kernels.

I have seen the same report with systemd-networkd:
https://github.com/systemd/systemd/issues/9252
and a dummy interface is the workaround there too.

Kernel 4.9 was working fine, for example:

"
root@anansi:~# ip link add bridge99 type bridge
root@anansi:~# ip link set bridge99 up
root@anansi:~# ip link show bridge99
3: bridge99: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/ether be:42:10:4a:d7:7a brd ff:ff:ff:ff:ff:ff
"
 
