[SOLVED] Proxmox VE - IPv6 Problems

cornholio21

Hello,
I need some help with my IPv6 configuration on my Proxmox VE 3.4 node.
Some time ago (around mid-2014) the whole node was updated, and since then IPv6 doesn't work on my guests.

I have OpenVZ containers and some KVM guests; I'm using both.

Some information:
Due to restrictions of my datacenter, I have to use bridged networking and assign every VM interface a static MAC address. I have to give these MAC addresses to my datacenter so they can whitelist them. For the IPv6 subnet I can only use one MAC address. So I'm not using venet in the OpenVZ containers; instead I create a network device on every OpenVZ or KVM guest. IPv4 works fine with this configuration.
I only get a /64 IPv6 subnet from my datacenter.
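For reference, this is roughly where those static MACs end up; the VM ID and the MAC below are just placeholders, and the net0 line assumes the usual PVE 3.x syntax:
Code:
# KVM guest, /etc/pve/qemu-server/100.conf - the MAC must match what the datacenter whitelisted
net0: virtio=00:50:56:00:03:8D,bridge=vmbr0

# OpenVZ guests get theirs via a veth (NETIF) entry in the container config,
# see the NETIF line further down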

So the goal is to get IPv6 connectivity for every guest on my node.

Let's look at the network configuration...


Host:
Code:
auto lo
iface lo inet loopback


iface eth0 inet manual


auto vmbr0
iface vmbr0 inet static
        address  123.123.55.172
        netmask  255.255.255.255
        gateway  123.123.55.161
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        pointopoint 123.123.55.161


iface vmbr0 inet6 static
        address 2a01:123:123:123::1
        netmask 64
        gateway fe80::1

Guest:
Code:
auto lo
iface lo inet loopback


auto eth0


iface eth0 inet static
        address 123.123.55.185
        netmask 255.255.255.224
        gateway 123.123.55.161


iface eth0 inet6 static
        address 2a01:123:123:123::100
        netmask 64
        gateway fe80::1

Code:
NETIF="ifname=eth0,bridge=vmbr0,mac=00:50:56:00:03:8D,host_ifname=veth100.0,host_mac=BA:16:83:2A:AC:9D"


What works:
- IPv6 on the host. I can reach the internet from the host via IPv6, and the host is also reachable from outside.
- IPv4 on host and guests

What does not work:
- I cannot ping from the guest to the host:
# ping6 2a01:123:123:123::1
PING 2a01:123:123:123::1(2a01:123:123:123::1) 56 data bytes
From 2a01:123:123:123::100 icmp_seq=2 Destination unreachable: Address unreachable
From 2a01:123:123:123::100 icmp_seq=3 Destination unreachable: Address unreachable
From 2a01:123:123:123::100 icmp_seq=4 Destination unreachable: Address unreachable
It does not work in the reverse direction either.

- I cannot reach the internet via IPv6 from the guests.


Any idea how to bring my guests online via IPv6?

Thanks!


Regards,
Anton
 
Last edited:
Re: Proxmox VE - IPv6 Problems

So I got some updates...
I checked everything out with tcpdump and this is what I get:

When I run ping6 from my guest I can see the packets in Wireshark... I can see the echo request and also the echo reply from the remote machine on the host's eth0 interface!

So I think most of the configuration is OK, but somehow the reply packet gets "lost" on the host machine and doesn't reach the guest?

Edit: I found some ICMPv6 errors on vmbr0 saying that the guest's IP is not reachable.
On veth100.0 I can see only the request but not the reply.
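For reference, I compared the traffic on the different interfaces with something like this on the host (interface names as in my setup above):
Code:
# watch ICMPv6 on the uplink, on the bridge and on the container's veth
tcpdump -ni eth0 icmp6
tcpdump -ni vmbr0 icmp6
tcpdump -ni veth100.0 icmp6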

Do I have to add a route to the guest?

Edit2:
Now I can ping the gateway from the guest:
Code:
# ping6 -I eth0 fe80::1
PING fe80::1(fe80::1) from fe80::250:56ff:fe00:38d eth0: 56 data bytes
64 bytes from fe80::1: icmp_seq=1 ttl=64 time=0.945 ms
64 bytes from fe80::1: icmp_seq=2 ttl=64 time=3.21 ms

But I still cannot get outside... The packets are on vmbr0 but are not being forwarded to veth100.0.
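A quick sanity check at this point (just a suggestion): make sure the veth really is a port of the bridge and that the bridge has learned the guest's MAC on it:
Code:
# list the ports of vmbr0 - veth100.0 should show up here
brctl show vmbr0

# show the MAC addresses the bridge has learned per port
brctl showmacs vmbr0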
 
Last edited:
Re: Proxmox VE - IPv6 Problems

Hi, thanks for a quick reply!

I installed the kernel and now I'm getting even more problems...
The host doesn't seem to find the router:

Code:
#ip -6 neigh

fe80::1 dev vmbr0  FAILED

Code:
# ping6 -I vmbr0 fe80::1
PING fe80::1(fe80::1) from fe80::ca60:ff:febe:3a84 vmbr0: 56 data bytes
From fe80::ca60:ff:febe:3a84 icmp_seq=1 Destination unreachable: Address unreachable
From fe80::ca60:ff:febe:3a84 icmp_seq=2 Destination unreachable: Address unreachable

Code:
# ifconfig

eth0      Link encap:Ethernet  HWaddr c8:60:00:be:3a:84
          inet6 addr: fe80::ca60:ff:febe:3a84/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:180803 errors:0 dropped:0 overruns:0 frame:0
          TX packets:248762 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:36398725 (34.7 MiB)  TX bytes:52558196 (50.1 MiB)


lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:10553 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10553 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6594371 (6.2 MiB)  TX bytes:6594371 (6.2 MiB)


tap200i0  Link encap:Ethernet  HWaddr 3a:9c:11:19:fc:5e
          inet6 addr: fe80::389c:11ff:fe19:fc5e/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 B)  TX bytes:732 (732.0 B)


venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


veth100.0 Link encap:Ethernet  HWaddr ba:16:83:2a:ac:9d
          inet6 addr: fe80::b816:83ff:fe2a:ac9d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:219650 errors:0 dropped:0 overruns:0 frame:0
          TX packets:152978 errors:0 dropped:177 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:33181191 (31.6 MiB)  TX bytes:27823076 (26.5 MiB)


veth102.0 Link encap:Ethernet  HWaddr de:96:3a:c3:df:91
          inet6 addr: fe80::dc96:3aff:fec3:df91/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2399 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2309 errors:0 dropped:168 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:462262 (451.4 KiB)  TX bytes:296596 (289.6 KiB)


vmbr0     Link encap:Ethernet  HWaddr c8:60:00:be:3a:84
          inet addr:123.123.55.172  Bcast:123.123.55.172  Mask:255.255.255.255
          inet6 addr: 2a01:123:123:123::1/64 Scope:Global
          inet6 addr: fe80::ca60:ff:febe:3a84/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:25536 errors:0 dropped:0 overruns:0 frame:0
          TX packets:27683 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5740049 (5.4 MiB)  TX bytes:15891253 (15.1 MiB)


vmbr1     Link encap:Ethernet  HWaddr 3a:9c:11:19:fc:5e
          inet addr:192.168.100.1  Bcast:192.168.100.255  Mask:255.255.255.0
          inet6 addr: fe80::c8ab:9bff:fe3a:7d1d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21 errors:0 dropped:0 overruns:0 frame:0
          TX packets:90 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1855 (1.8 KiB)  TX bytes:7958 (7.7 KiB)


Code:
# ip -6 ro
2a01:123:123:123::/64 dev vmbr0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
fe80::1 dev venet0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev vmbr0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev venet0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev veth100.0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev veth102.0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev vmbr1  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev tap200i0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
default via fe80::1 dev vmbr0  metric 1024  mtu 1500 advmss 1440 hoplimit 0

Any suggestions?


Edit2:

I did some trial and error and found out that I get a connection to the router after pinging like this:
Code:
ping6 -I eth0 fe80::1

After this I got the following result:

Code:
 # ip -6 neigh
fe80::1 dev vmbr0  FAILED


# ping6 -I eth0 fe80::1
PING fe80::1(fe80::1) from fe80::ca60:ff:febe:3a84 eth0: 56 data bytes
From fe80::ca60:ff:febe:3a84 icmp_seq=2 Destination unreachable: Address unreachable
From fe80::ca60:ff:febe:3a84 icmp_seq=3 Destination unreachable: Address unreachable
From fe80::ca60:ff:febe:3a84 icmp_seq=4 Destination unreachable: Address unreachable

# ip -6 neigh
fe80::1 dev vmbr0 lladdr 78:fe:3d:46:e6:10 router DELAY

Can someone explain this behaviour? And can I use this trick to get my guests online? The guests are still not able to get an IPv6 connection, BUT they can finally see the router:
Guest:
Code:
ip -6 neigh
fe80::1 dev eth0 lladdr 78:fe:3d:46:e6:10 router STALE

In tcpdump captures I can see echo replies coming back from remote machines, but they get stuck in vmbr0 and don't reach veth100.0.


Please test with the latest kernel from pvetest:

# wget ftp://download1.proxmox.com/debian/...pve-kernel-2.6.32-37-pve_2.6.32-148_amd64.deb
# dpkg -i pve-kernel-2.6.32-37-pve_2.6.32-148_amd64.deb
 
Last edited:
Re: Proxmox VE - IPv6 Problems

So I now have to ping that gateway manually; after that the host is reachable via IPv6. So now, how do I get my guests online? I created a veth on my guests and was thinking it would pass all L2 traffic, so why does this not work? The incoming traffic is stuck in vmbr0 and isn't switched to the guest over veth100.0. The outgoing traffic goes out and reaches remote machines.

Anton
 
Re: Proxmox VE - IPv6 Problems

I believe it was the same issue. I use Hamachi for lots of VPN stuff. It would start fine for 1-5 minutes and then fail. Once I switched host/guest over to the 3.10 kernel it was all good. But yes, no containers, all VMs. If I switched Hamachi to use IPv4 only, it worked fine, but that switch is not persistent and was a lousy solution anyway.
 
Re: Proxmox VE - IPv6 Problems

I'm running the current default 3.4 kernel and have been looking to add IPv6 support to all my VMs and CTs, but it seems this may still be too premature. It would be great if Proxmox could state the official position on IPv6, or point me to it if I've missed it. The best advice I could find was not in the official forums or wiki but here: https://www.erawanarifnugroho.com/2013/08/12/configuring-ipv6-in-proxmox-ve3.html I haven't tried this yet, but it appears you can get IPv6 working even without Proxmox GUI support for it?
 
Re: Proxmox VE - IPv6 Problems

I'm running the current default 3.4 kernel and have been looking to add IPv6 support to all my VMs and CTs, but it seems this may still be too premature. It would be great if Proxmox could state the official position on IPv6, or point me to it if I've missed it. The best advice I could find was not in the official forums or wiki but here: https://www.erawanarifnugroho.com/2013/08/12/configuring-ipv6-in-proxmox-ve3.html I haven't tried this yet, but it appears you can get IPv6 working even without Proxmox GUI support for it?

Actually, if you run your CTs or VMs with the virtual network device (veth) you should be able to use IPv6 without any extra configuration (except the sysctl IPv6 setup), but as you can see, some people like me have this problem with layer 2 switching...
 
Re: Proxmox VE - IPv6 Problems

Actually, if you run your CTs or VMs with the virtual network device (veth) you should be able to use IPv6 without any extra configuration (except the sysctl IPv6 setup), but as you can see, some people like me have this problem with layer 2 switching...

Thanks cornholio21, I appreciate your reply. I don't want to hijack your thread though, so let me know if you think it's best I start a new one.

In my situation I provide clients with their own VPS, and it's recommended that each VPS uses a /64. So I would need to manually configure (since there is no IPv6 Proxmox GUI support yet) a separate /64 for the host machine, and then each CT would be able to route IPv6 through it?
 
Re: Proxmox VE - IPv6 Problems

Thanks cornholio21, I appreciate your reply. I don't want to hijack your thread though, so let me know if you think it's best I start a new one.

In my situation I provide clients with their own VPS, and it's recommended that each VPS uses a /64. So I would need to manually configure (since there is no IPv6 Proxmox GUI support yet) a separate /64 for the host machine, and then each CT would be able to route IPv6 through it?

If you use OpenVZ with the venet device, try this:
http://robert.penz.name/582/ipv6-openvz-ves-and-debianproxmox-as-host-system/

I am actually still searching for a solution in my case...

UPDATE:
Somehow my datacenter's gateway is a little bit buggy. I could now establish an IPv6 connection on the host with the following trick:
Code:
#ip -6 neigh add fe80::1 lladdr 78:fe:3d:46:e6:10 dev vmbr0

I tried to do the same on my VPS guest, but I still have a problem where IPv6 packets aren't switched to the VPS and are dropped at vmbr0. After some research I found this in the capture:

Code:
0.000000      2a01:4f8:XXXX:XXXX::100   2a02:2e0:3fe:XXXX:XXXX::  ICMPv6  118  Echo (ping) request  id=0x0964, seq=1, hop limit=64 (reply in 2)
Ethernet II, Src: Vmware_00:03:8d (00:50:56:00:03:8d)  <<< (VM's MAC address), Dst: JuniperN_46:e6:10 (78:fe:3d:46:e6:10)

0.005172      2a02:2e0:3fe:XXXX:XXXX::  2a01:4f8:XXXX:XXXX::100   ICMPv6  118  Echo (ping) reply    id=0x0964, seq=1, hop limit=56 (request in 1)
Ethernet II, Src: JuniperN_46:e6:10 (78:fe:3d:46:e6:10), Dst: AsustekC_be:3a:84 (c8:60:00:be:3a:84)  <<< (Host's MAC address)

So I figured out that my datacenter is only allowing one MAC for the /64 subnet. Any idea how I could solve this?
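One approach that should fit this single-MAC restriction (only a sketch, I haven't verified it in this exact bridged setup) is to let the host do NDP proxying, i.e. answer neighbor solicitations for the guest addresses itself so the datacenter only ever sees the host's MAC, and forward the traffic to the guests; the guest address below is a placeholder:
Code:
# on the host: answer neighbor solicitations for the guest address with the host's own MAC
sysctl -w net.ipv6.conf.vmbr0.proxy_ndp=1
ip -6 neigh add proxy 2a01:xxxx:xxxx:xxxx::100 dev vmbr0

# IPv6 forwarding must also be enabled (see the sysctl change further down),
# and the guest then uses the host (2a01:xxxx:xxxx:xxxx::1) as its IPv6 gateway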
 
Last edited:
Re: Proxmox VE - IPv6 Problems

Well, I got my first VPS online via IPv6 now, but in a rather awkward way...
Maybe someone else wants to use this to fix the problem too.

First I edited my /etc/network/interfaces file on the host:
1. Add this line: "iface lo inet6 loopback" (I have read in some forums that this should do the trick with the NDP problem. It actually doesn't help in my case, but you should try it - maybe it will work for you...)
2. My vmbr0 configuration now looks like this:
Code:
auto vmbr0
iface vmbr0 inet static
        address  5.xxx.xxx.172
        netmask  255.255.255.224
        gateway  5.xxx.xxx.161
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        pointopoint 5.xxx.xxx.161


iface vmbr0 inet6 static
        address 2a01:xxxx:xxxx:xxxx::1
        netmask 64
        gateway fe80::1
        post-up sysctl -p
        post-up ip -6 neigh replace fe80::1 dev vmbr0 lladdr 78:fe:3d:46:e6:10
        post-up ip -6 neigh replace 2a01:xxxx:xxxx:xxxx::100 dev vmbr0 lladdr 00:50:56:00:03:8d

My main problem is NDP... As you can see, I set up the IPv6 neighbours manually (NDP is like ARP in IPv4).
The first neigh line adds an entry for the gateway and the second adds the entry for the VPS.
The sysctl -p is for routing, see the next point.

Next, I enabled IPv6 forwarding (I can't bridge due to my datacenter's restrictions) by editing /etc/sysctl.conf:
Add this line: "net.ipv6.conf.all.forwarding = 1"
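If you don't want to reboot, the setting can also be applied and verified on the fly:
Code:
# apply immediately and check the result
sysctl -w net.ipv6.conf.all.forwarding=1
sysctl net.ipv6.conf.all.forwarding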

Then I had to edit the VPS's /etc/network/interfaces file. On the VPS, also add the line "iface lo inet6 loopback":
Code:
iface eth0 inet static
        address 5.xxx.xxx.185
        netmask 255.255.255.224
        gateway 5.xxx.xxx.161


iface eth0 inet6 static
        address 2a01:xxxx:xxxx:xxxx::100
        netmask 64
        gateway 2a01:xxxx:xxxx:xxxx::1
        post-up ip -6 neigh replace 2a01:xxxx:xxxx:xxxx::1 dev eth0 lladdr c8:60:00:be:3a:84

After this I rebooted and my host and VPS were online via IPv6.

Not comfortable, but it's working :)


UPDATE: I am digging deeper and I think I found a bug in the br_multicast kernel implementation... I will try to compile my own kernel with a fix and test it... Stay tuned!
 
Last edited:
Re: Proxmox VE - IPv6 Problems

Hello,
I finally tested the new kernel build with the patch and it works perfectly now. The NDP packets are now switched and all VMs are able to communicate via IPv6.
The only remaining problem is that I can't get NDP packets from my gateway, but I think that's some kind of restriction from the Hetzner datacenter...

Here is the promised solution:
http://patchwork.ozlabs.org/patch/326048/

While I was searching for someone who had the same problem, I found this thread and used the patch in the pve-kernel.

I think this fix should be included in all pve-kernel builds of Proxmox.

The patch is here:

Code:
diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
index ef66365..8ccc0bf 100644
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -1562,8 +1562,8 @@ static int br_multicast_ipv6_rcv(struct net_bridge *br,
                return 0;

        /* Prevent flooding this packet if there is no listener present */
-       if (!ipv6_addr_is_ll_all_nodes(&ip6h->daddr))
-               BR_INPUT_SKB_CB(skb)->mrouters_only = 1;
+/*     if (!ipv6_addr_is_ll_all_nodes(&ip6h->daddr))
+               BR_INPUT_SKB_CB(skb)->mrouters_only = 1;*/

        if (ip6h->nexthdr != IPPROTO_HOPOPTS ||
            ip6h->payload_len == 0)
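
If someone wants to try this themselves, this is roughly how to apply it (the path and file name below are just placeholders, and the exact pve-kernel build steps may differ):
Code:
# apply the patch to the unpacked kernel tree, then rebuild and install the pve-kernel .deb
cd /usr/src/pve-kernel-source        # illustrative path
patch -p1 < br_multicast_fix.diff

# possible runtime workaround without rebuilding, assuming the multicast snooping
# code is what swallows the NDP packets: disable snooping on the bridge
echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping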

I hope I could help you people out there... ;)

Please mark as SOLVED


Regards,
Anton
 
Last edited:
Re: Proxmox VE - IPv6 Problems

Hi dietmar,
I tested it with 2.6.32-37-pve (the newest git repo) on KVM and on OpenVZ. (Fresh install)

I could test this for you on 3.10.0 if you want :)

Regards,
Anton


Edit: Even on Windows Server IPv6 is now working correctly.
 
Last edited:
Re: Proxmox VE - IPv6 Problems

Hi,

I also observed strange IPv6 problems since my upgrade to the latest Proxmox release. I had problems with IPv6 connectivity inside my VMs and also from the host system itself. I just installed the provided 2.6 kernel package and the problems seem to be solved. Thank you for taking care!

best regards

Daniel
 
