IPv6 bridged networking to KVM

hacman

Hello all,

I'm having an issue with our setup that I just can't seem to lick. Having gone around in circles, I'm wondering if anyone here has any ideas, as it's probably something simple I'm missing.

We have multiple hosts, each with six NICs. The NICs are grouped into two bonds, and each bond feeds a bridge. Bridge 0 (vmbr0) is on our internal network and has an IP assigned - this is the management interface. Bridge 1 (vmbr1) is on our external network - this bridge has no IP assigned and just acts as a "dumb" switch, passing traffic from our VMs to our physical switches.
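
For context, our /etc/network/interfaces on each host is shaped roughly like this (a trimmed sketch - the slave NIC lists and bond mode below are placeholders rather than our exact file):
Code:
auto bond0
iface bond0 inet manual
        slaves eth2 eth5              # placeholder slave list
        bond_miimon 100
        bond_mode active-backup       # placeholder mode

auto bond1
iface bond1 inet manual
        slaves eth0 eth1 eth3 eth4    # placeholder slave list
        bond_miimon 100
        bond_mode active-backup       # placeholder mode

auto vmbr0
iface vmbr0 inet static
        address 10.2.2.3
        netmask 255.255.0.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge_ports bond1
        bridge_stp off
        bridge_fd 0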

IPv4 works perfectly, but IPv6 will not work on vmbr1. VMs can talk to each other, but not to anything outside of the host.

I've included our configs below:

bond0 Link encap:Ethernet HWaddr 00:1a:4b:ae:38:fc
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:546943610 errors:0 dropped:528420 overruns:0 frame:0
TX packets:611294590 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:491123663819 (457.3 GiB) TX bytes:571758561831 (532.4 GiB)

bond1 Link encap:Ethernet HWaddr 00:1b:78:ce:5b:6c
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:7815730 errors:0 dropped:251057 overruns:0 frame:0
TX packets:5876170 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4473613505 (4.1 GiB) TX bytes:724466417 (690.9 MiB)

eth0 Link encap:Ethernet HWaddr 00:1f:29:5b:5d:86
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:167364 errors:0 dropped:167364 overruns:0 frame:0
TX packets:876402 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:12713691 (12.1 MiB) TX bytes:56089728 (53.4 MiB)

eth1 Link encap:Ethernet HWaddr 00:1a:4b:ae:38:fe
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:19105350 errors:0 dropped:167371 overruns:0 frame:0
TX packets:331851123 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:27646536328 (25.7 GiB) TX bytes:324839342638 (302.5 GiB)

eth2 Link encap:Ethernet HWaddr 00:1a:4b:ae:38:fc
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:525618160 errors:0 dropped:0 overruns:0 frame:0
TX packets:139762357 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:461696547950 (429.9 GiB) TX bytes:125263292631 (116.6 GiB)
Interrupt:17 Memory:f9fe0000-fa000000

eth3 Link encap:Ethernet HWaddr 00:1f:29:5b:5d:87
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:2052736 errors:0 dropped:193685 overruns:0 frame:0
TX packets:138804708 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1767865850 (1.6 GiB) TX bytes:121599836834 (113.2 GiB)
Interrupt:16 Memory:f9fa0000-f9fc0000

eth4 Link encap:Ethernet HWaddr 00:1b:78:ce:5b:6c
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:7564673 errors:0 dropped:0 overruns:0 frame:0
TX packets:3005229 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4455984011 (4.1 GiB) TX bytes:369727466 (352.5 MiB)

eth5 Link encap:Ethernet HWaddr 00:1b:78:ce:5b:56
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:251057 errors:0 dropped:251057 overruns:0 frame:0
TX packets:2870941 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:17629494 (16.8 MiB) TX bytes:354738951 (338.3 MiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:93672137 errors:0 dropped:0 overruns:0 frame:0
TX packets:93672137 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:299740455463 (279.1 GiB) TX bytes:299740455463 (279.1 GiB)

vmbr0 Link encap:Ethernet HWaddr 00:1a:4b:ae:38:fc
inet addr:10.2.2.3 Bcast:10.2.255.255 Mask:255.255.0.0
inet6 addr: fe80::21a:4bff:feae:38fc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:282873143 errors:0 dropped:28 overruns:0 frame:0
TX packets:269198619 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:452557436245 (421.4 GiB) TX bytes:491448916817 (457.6 GiB)

vmbr1 Link encap:Ethernet HWaddr 00:1b:78:ce:5b:6c
inet6 addr: fe80::21b:78ff:fece:5b6c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1240581 errors:0 dropped:148 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:72383426 (69.0 MiB) TX bytes:888 (888.0 B)

Kernel IPv6 routing table
Destination Next Hop Flag Met Ref Use If
::1/128 :: U 256 0 0 lo
2a02:5300:1:2::1/128 :: U 1024 0 1 vmbr1
2a02:5300:1:2::/64 :: U 1024 0 1 vmbr1
fe80::/64 :: U 256 0 0 vmbr0
fe80::/64 :: U 256 0 0 vmbr1
::/0 :: U 1 8 76 vmbr1
::/0 :: U 1024 0 0 vmbr1
::/0 :: !n -1 1 95 lo
::1/128 :: Un 0 9 15371 lo
fe80::/128 :: Un 0 1 0 lo
fe80::/128 :: Un 0 1 0 lo
fe80::21a:4bff:feae:38fc/128 :: Un 0 1 0 lo
fe80::21b:78ff:fece:5b6c/128 :: Un 0 1 0 lo
ff00::/8 :: U 256 8 52497 vmbr0
ff00::/8 :: U 256 8 533826 vmbr1
::/0 :: !n -1 1 95 lo

net.ipv6.anycast_src_echo_reply = 0
net.ipv6.auto_flowlabels = 1
net.ipv6.bindv6only = 0
net.ipv6.conf.all.accept_dad = 1
net.ipv6.conf.all.accept_ra = 1
net.ipv6.conf.all.accept_ra_defrtr = 1
net.ipv6.conf.all.accept_ra_from_local = 0
net.ipv6.conf.all.accept_ra_min_hop_limit = 1
net.ipv6.conf.all.accept_ra_mtu = 1
net.ipv6.conf.all.accept_ra_pinfo = 1
net.ipv6.conf.all.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.all.accept_ra_rtr_pref = 1
net.ipv6.conf.all.accept_redirects = 1
net.ipv6.conf.all.accept_source_route = 0
net.ipv6.conf.all.autoconf = 1
net.ipv6.conf.all.dad_transmits = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.force_mld_version = 0
net.ipv6.conf.all.force_tllao = 0
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.hop_limit = 64
net.ipv6.conf.all.ignore_routes_with_linkdown = 0
net.ipv6.conf.all.max_addresses = 16
net.ipv6.conf.all.max_desync_factor = 600
net.ipv6.conf.all.mc_forwarding = 0
net.ipv6.conf.all.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.all.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.all.mtu = 1280
net.ipv6.conf.all.ndisc_notify = 0
net.ipv6.conf.all.proxy_ndp = 0
net.ipv6.conf.all.regen_max_retry = 3
net.ipv6.conf.all.router_probe_interval = 60
net.ipv6.conf.all.router_solicitation_delay = 1
net.ipv6.conf.all.router_solicitation_interval = 4
net.ipv6.conf.all.router_solicitations = 3
sysctl: reading key "net.ipv6.conf.all.stable_secret"
net.ipv6.conf.all.suppress_frag_ndisc = 1
net.ipv6.conf.all.temp_prefered_lft = 86400
net.ipv6.conf.all.temp_valid_lft = 604800
net.ipv6.conf.all.use_oif_addrs_only = 0
net.ipv6.conf.all.use_tempaddr = 0

proxmox-ve: 4.3-71 (running kernel: 4.4.21-1-pve)
pve-manager: 4.3-10 (running version: 4.3-10/7230e60f)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.13-1-pve: 4.4.13-56
pve-kernel-4.4.13-2-pve: 4.4.13-58
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.15-1-pve: 4.4.15-60
pve-kernel-4.4.16-1-pve: 4.4.16-64
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.4.10-1-pve: 4.4.10-54
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-47
qemu-server: 4.0-94
pve-firmware: 1.1-10
libpve-common-perl: 4.0-80
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-68
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.3-14
pve-qemu-kvm: 2.7.0-8
pve-container: 1.0-81
pve-firewall: 2.0-31
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.5-1
lxcfs: 2.0.4-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
ceph: 0.94.9-1~bpo80+1

Any ideas or help anyone can offer are much appreciated, as this has really got me stumped.

Thanks!

Jon
 
Hi all,

So it seems from testing that the issue is just with routing outside the cluster. The VMs can be on different nodes and talk to each other over IPv6, but they cannot reach our gateway.

Very odd!
 
With World IPv6 Day last week there have been a few people trying to set up IPv6 connectivity for their virtual guests with libvirt and KVM. For those whose guests are using bridged networking to their LAN, there is really not much to say on the topic. If your LAN has IPv6 enabled and your virtualization host is getting an IPv6 address, then your guests can get IPv6 addresses in exactly the same manner, since they appear directly on the LAN. For those who are using routed / NATed networking with KVM, via the “default” virtual network libvirt creates out of the box, there is a little more work to do. That is what this blog posting will attempt to illustrate.
 
There's a bunch of things that could go wrong there: wrong routes, a firewall blocking neighbor discovery, wrong MAC addresses on a server where the hosting provider requires them to be allocated, ...

Some more information would be useful: i.e. which hosting provider you're using (if any), what routing information you got from them (gateway, prefix length), whether they require you to use specific MAC addresses (e.g. OVH), and how the hosts are connected.

Also, you're saying VMs can talk to each other, but not outside the host - does that mean your problem isn't just with talking to the WAN, but also with talking to VMs across hosts in the same LAN?
If so, then please also try pinging the link-local addresses (fe80:...) between two VMs on different hosts.
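
For example (the fe80 address below is just a placeholder for the target VM's link-local address - the %interface zone suffix is required for link-local pings):
Code:
# from inside one VM, ping the other VM's link-local address via the guest's NIC
ping6 -c 4 fe80::xxxx:xxxx:xxxx:xxxx%eth0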

What exactly do you mean by "bridged, but on the WAN side"?

Do you have any special settings? (ip rule, ip neighbor, tc/qdiscs, ip6tables, nftables).

Is the Proxmox firewall enabled at the datacenter / host / VM / VM-NIC level?

Did you try disabling the firewall everywhere (host + guests)? If that's not an option, an overview of all the involved components would be useful - particularly the output of the following commands on two of the hosts while VMs are up and running on both of them, and from inside one VM on each host:
Code:
# ip addr
# ip -6 route
# ip rule
# ip neighbor
# ip6tables-save
# nft list ruleset
# tc qdisc show
 
Hi,

Thanks for the reply - I'll try provide answers to questions here, and output of the commands in a followup.
  • We have 3 hosts that are co-located. Each has 6 NICs, in two bonds, that feed two bridges. We have a LAN bridge (vmbr0 - PMVE GUI, etc) and a WAN bridge (vmbr1). It is the WAN bridge that all the VMs are connected to.
  • All NICs go into our own switches - the network provider then gives us a set of drops into those. Other non-PMVE devices on those switches work fine with IPv6, but can't communicate with the PMVE VMs, or the other way round.
  • The upstream provider supplies us with a /64 of IPv6 space, and lists the gateway as ::1 in that range. This is not tied to MAC addresses or anything like that. (A rough sketch of how a guest is addressed from that range follows this list.)
  • Proxmox firewall is not enabled at any level.
  • The VMs can communicate via IPv6 with each other, even if on different hosts.
  • There are no special settings that we have set at this point.
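As mentioned above, the guests are simply addressed statically out of that /64 - roughly equivalent to the following (an illustrative iproute2 sketch; the values match VM-1's address and the provider gateway shown in the output further down):
Code:
# inside a guest - host part of the address is per-VM
ip -6 addr add 5a05:5300:1:2::2/64 dev eth0
ip -6 route add default via 5a05:5300:1:2::1 dev eth0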
Once again - any help or ideas very much appreciated.

Jon
 
Hi,

Please find output from the nodes below:

root@node-1:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond1 state DOWN group default qlen 1000
link/ether 00:1f:29:60:f3:7f brd ff:ff:ff:ff:ff:ff
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond1 state DOWN group default qlen 1000
link/ether 00:1f:29:60:f3:7c brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 00:19:bb:32:f0:e8 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond1 state UP group default qlen 1000
link/ether 00:1f:29:60:f3:7d brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond1 state UP group default qlen 1000
link/ether 00:1f:29:60:f3:7e brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 00:19:bb:32:f0:e6 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 00:19:bb:32:f0:e8 brd ff:ff:ff:ff:ff:ff
9: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
link/ether 00:1f:29:60:f3:7d brd ff:ff:ff:ff:ff:ff
10: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
[LAN INFO REDACTED]
11: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:1f:29:60:f3:7d brd ff:ff:ff:ff:ff:ff
inet6 fe80::21f:29ff:fe60:f37d/64 scope link
valid_lft forever preferred_lft forever

root@node-1:~# ip -6 route
fe80::/64 dev vmbr0 proto kernel metric 256 pref medium
fe80::/64 dev vmbr1 proto kernel metric 256 pref medium
fe80::/64 dev vmbr2 proto kernel metric 256 pref medium

root@node-1:~# ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default

root@node-1:~# ip neighbor
[LAN INFO REDACTED]
fe80::21d:71ff:fe72:3dc0 dev vmbr1 lladdr 00:1d:71:72:3d:c0 router STALE

root@node-1:~# ip6tables-save
# Generated by ip6tables-save v1.4.21 on Tue Dec 13 17:02:27 2016
*filter
:INPUT ACCEPT [66:5104]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
# Completed on Tue Dec 13 17:02:27 2016

root@node-1:~# tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc pfifo_fast 0: dev eth0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth1 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc mq 0: dev eth2 root
qdisc pfifo_fast 0: dev eth2 parent :1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth3 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth4 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc mq 0: dev eth5 root
qdisc pfifo_fast 0: dev eth5 parent :1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc noqueue 0: dev bond0 root refcnt 2
qdisc noqueue 0: dev bond1 root refcnt 2
qdisc noqueue 0: dev vmbr0 root refcnt 2
qdisc noqueue 0: dev vmbr1 root refcnt 2
qdisc noqueue 0: dev vmbr2 root refcnt 2
qdisc pfifo_fast 0: dev tap302i0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev tap201i0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev tap900i0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1

root@node-5:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
link/ether 00:19:bb:cd:ba:94 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
link/ether 00:19:bb:cd:ba:96 brd ff:ff:ff:ff:ff:ff
4: eth4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
link/ether 00:1b:78:ce:c6:cc brd ff:ff:ff:ff:ff:ff
5: eth5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
link/ether 00:26:55:dc:fc:f1 brd ff:ff:ff:ff:ff:ff
6: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 00:26:55:dc:fc:f0 brd ff:ff:ff:ff:ff:ff
7: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 00:1b:78:ce:c6:c2 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 00:1b:78:ce:c6:cc brd ff:ff:ff:ff:ff:ff
9: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
link/ether 00:19:bb:cd:ba:94 brd ff:ff:ff:ff:ff:ff
10: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
[LAN INFO REDACTED]
11: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:19:bb:cd:ba:94 brd ff:ff:ff:ff:ff:ff
inet6 fe80::219:bbff:fecd:ba94/64 scope link
valid_lft forever preferred_lft forever

root@node-5:~# ip -6 route
5a05:5300:1:2::1 dev vmbr1 metric 1024 pref medium
fe80::/64 dev vmbr0 proto kernel metric 256 pref medium
fe80::/64 dev vmbr1 proto kernel metric 256 pref medium
default via 5a05:5300:1:2::1 dev vmbr1 metric 1024 pref medium

root@node-5:~# ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default

root@node-5:~# ip neighbor
[LAN INFO REDACTED]
fe80::21d:71ff:fe72:3dc0 dev vmbr1 lladdr 00:1d:71:72:3d:c0 router STALE

root@node-5:~# ip6tables-save
# Generated by ip6tables-save v1.4.21 on Tue Dec 13 17:03:56 2016
*filter
:INPUT ACCEPT [63:4792]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
# Completed on Tue Dec 13 17:03:56 2016

root@node-5:~# tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc mq 0: dev eth0 root
qdisc pfifo_fast 0: dev eth0 parent :1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc mq 0: dev eth1 root
qdisc pfifo_fast 0: dev eth1 parent :1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth4 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth5 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc mq 0: dev eth2 root
qdisc pfifo_fast 0: dev eth2 parent :1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc mq 0: dev eth3 root
qdisc pfifo_fast 0: dev eth3 parent :1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc noqueue 0: dev bond0 root refcnt 2
qdisc noqueue 0: dev bond1 root refcnt 2
qdisc noqueue 0: dev vmbr0 root refcnt 2
qdisc noqueue 0: dev vmbr1 root refcnt 2
qdisc pfifo_fast 0: dev tap104i0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev tap301i0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev tap100i0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev tap101i0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev tap901i0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev tap103i0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
root@node-5:~#

Thanks,

Jon
 
Hi,

Please find the output from the VMs:

[root@VM-1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 12:cc:7e:8c:4b:d3 brd ff:ff:ff:ff:ff:ff
inet 88.96.38.24/28 brd 88.96.38.31 scope global eth0
valid_lft forever preferred_lft forever
inet6 5a05:5300:1:2:10cc:7eff:fe8c:4bd3/64 scope global noprefixroute dynamic
valid_lft 2591896sec preferred_lft 604696sec
inet6 5a05:5300:1:2::2/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::10cc:7eff:fe8c:4bd3/64 scope link
valid_lft forever preferred_lft forever

[root@VM-1 ~]# ip -6 route
unreachable ::/96 dev lo metric 1024 error -101
unreachable ::ffff:0.0.0.0/96 dev lo metric 1024 error -101
unreachable 2002:a00::/24 dev lo metric 1024 error -101
unreachable 2002:7f00::/24 dev lo metric 1024 error -101
unreachable 2002:a9fe::/32 dev lo metric 1024 error -101
unreachable 2002:ac10::/28 dev lo metric 1024 error -101
unreachable 2002:c0a8::/32 dev lo metric 1024 error -101
unreachable 2002:e000::/19 dev lo metric 1024 error -101
2a00:fd80:aaaa:ffff::eeee:ff1 via 5a05:5300:1:2::1 dev eth0 metric 0
cache
5a05:5300:1:2::/64 dev eth0 proto kernel metric 256
unreachable 3ffe:ffff::/32 dev lo metric 1024 error -101
fe80::/64 dev eth0 proto kernel metric 256
default via 5a05:5300:1:2::1 dev eth0 proto static metric 100

[root@VM-1 ~]# ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default

[root@VM-1 ~]# ip neighbor
5a05:5300:1:2::1 dev eth0 FAILED
fe80::21d:71ff:fe72:3dc0 dev eth0 lladdr 00:1d:71:72:3d:c0 router REACHABLE
88.96.38.17 dev eth0 lladdr 00:1d:71:72:3d:c0 REACHABLE
88.96.38.26 dev eth0 lladdr ca:ec:d9:1e:4d:48 STALE
88.96.38.30 dev eth0 lladdr 00:0c:29:6c:9c:2d STALE

[root@VM-1 ~]# ip6tables-save

[root@VM-1 ~]# nft list ruleset
-bash: nft: command not found

[root@VM-1 ~]# tc qdisc show
qdisc pfifo_fast 0: dev eth0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
[root@VM-1 ~]#

[root@VM-5 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 9e:04:da:5e:9f:fc brd ff:ff:ff:ff:ff:ff
inet 88.96.38.25/28 brd 88.96.38.31 scope global eth0
valid_lft forever preferred_lft forever
inet6 5a05:5300:1:2::3/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::9c04:daff:fe5e:9ffc/64 scope link
valid_lft forever preferred_lft forever

[root@VM-5 ~]# ip -6 route
unreachable ::/96 dev lo metric 1024 error -101
unreachable ::ffff:0.0.0.0/96 dev lo metric 1024 error -101
unreachable 2002:a00::/24 dev lo metric 1024 error -101
unreachable 2002:7f00::/24 dev lo metric 1024 error -101
unreachable 2002:a9fe::/32 dev lo metric 1024 error -101
unreachable 2002:ac10::/28 dev lo metric 1024 error -101
unreachable 2002:c0a8::/32 dev lo metric 1024 error -101
unreachable 2002:e000::/19 dev lo metric 1024 error -101
2a01:7e00::f03c:91ff:fe93:e774 via 5a05:5300:1:2::1 dev eth0 metric 0
cache
5a05:5300:1:2::/64 dev eth0 proto kernel metric 256
unreachable 3ffe:ffff::/32 dev lo metric 1024 error -101
fe80::/64 dev eth0 proto kernel metric 256
default via 5a05:5300:1:2::1 dev eth0 proto static metric 100

[root@VM-5 ~]# ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default

[root@VM-5 ~]# ip neighbor
5a05:5300:1:2::1 dev eth0 FAILED
fe80::21d:71ff:fe72:3dc0 dev eth0 lladdr 00:1d:71:72:3d:c0 router REACHABLE
88.96.38.30 dev eth0 lladdr 00:0c:29:6c:9c:2d STALE
88.96.38.17 dev eth0 lladdr 00:1d:71:72:3d:c0 REACHABLE

[root@VM-5 ~]# ip6tables-save

[root@VM-5 ~]# nft list ruleset
-bash: nft: command not found

[root@VM-5 ~]# tc qdisc show
qdisc pfifo_fast 0: dev eth0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
[root@VM-5 ~]#

Thanks,

Jon
 
What surprises me is that you said the VMs can communicate across hosts successfully, just not with the other machines in the network.
Do they use the same IPv6 prefix? You could try pinging one of their link-local addresses (fe80::...) rather than the global one.
In any case, you should monitor the ping attempts with tcpdump on the host's bridge and possibly the eth & tap devices, to see how far which packets get (e.g. `# tcpdump -vni vmbr1 icmp6`).

Additionally, the VM on node 1 seems to have auto-configured, so it must have received a router advertisement packet at some point. Is that something you set up yourself? I wonder whether the machines you cannot talk to use a static or an auto-configured IPv6 setup.
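
If you want to check whether RAs (and neighbor discovery in general) make it onto the WAN bridge at all, a capture along these lines should show it (ICMPv6 type 134 = router advertisement, 135/136 = neighbor solicitation/advertisement; the byte-offset filter assumes no IPv6 extension headers):
Code:
# on the host: watch for RA/NS/NA traffic crossing vmbr1
tcpdump -vni vmbr1 'icmp6 and (ip6[40] == 134 or ip6[40] == 135 or ip6[40] == 136)'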
 
Hi,

Thanks for reviewing that - good to see there are no glaring errors!

The auto-assigned IP comes from the upstream provider - we got them to enable this temporarily to assist with diagnostics.

All systems are in the same prefix/subnet - the cross-host communication is what has me really stumped on this too!

I'll run some tcpdump captures and such and see what I find.

Thanks,

Jon
 
OK,

So tcpdump suggests that the VMs are constantly doing neighbor discovery, trying to find out who has the router address. Despite the auto-config having happened at some point, it looks like the RA packets are no longer arriving.
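
For reference, a couple of commands that can test this directly from inside a guest, assuming the ndisc6 package is installed (rdisc6 sends a router solicitation, ndisc6 performs a manual neighbor lookup for the gateway):
Code:
# solicit a router advertisement on the guest's NIC
rdisc6 eth0
# manually resolve the gateway via neighbor discovery
ndisc6 5a05:5300:1:2::1 eth0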

I'm checking our ToR switches to make sure nothing strange is going on there, but this shouldn't be the issue, as there are other systems on them that have no problems. Better to rule it out though...

Jon
 
