[SOLVED] OPNsense/PFsense IPv6 and VIPs - VIPs not routable - OVH network

FingerlessGloves

Well-Known Member
Oct 22, 2019
*Note*
See post #3 for how I solved the issue.

Hi Guys,

I've got a really strange problem. When I add my IPv6 address to the WAN interface on OPNsense (basically the same thing as pfSense), the address works: I can ping out to the internet and all is good. If I add a VIP, though, that VIP is unable to route to the internet. I've tried both /128 and /64 prefixes.

I've tested this by doing ping6 -S 2001:41d0:800:xxxx::2 google.com

The strange thing is... if I get someone, or a website, to ping the VIP that isn't working, boom! It starts working, for a little while, until there's been no traffic to that IP for a bit.

I can see the traffic leaving the firewall in the logs, so in my eyes there's some issue in the networking layer of Proxmox. Is there a setting I need to change, or something else?

Any ideas?


INFO
Virtual Environment 6.0-9
Hosted on a SoYouStart dedicated server.

Tried both a Linux bridge and Open vSwitch; the issue is the same on both.
The LAN port is bridged to vmbr0, with no IPv6 configured on the host, just one IPv4 address for managing Proxmox.
No Proxmox firewall is used.


I have a friend who's running ESXi on SoYouStart with the same setup in OPNsense, and he does not have this issue.

Any help would be most appreciated.
Jonny

EDIT: I've now installed pfSense to test, and I get the same problem.
 
So I've created a pfSense and an OPNsense VM on the same OVS bridge.

Checking the NDP table on both sides, the VIPs aren't there, so something is going on with NDP.

From the looks of it, NDP isn't crossing the bridge on Proxmox.
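For anyone wanting to check the same thing, these are roughly the commands involved (a sketch; vmbr0 is just my bridge name):
Code:
# On the Proxmox host: list the IPv6 neighbour (NDP) entries seen on the bridge
ip -6 neigh show dev vmbr0

# In the OPNsense/pfSense shell (FreeBSD): dump the NDP table
ndp -an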

I added the following to sysctl, but no luck.

net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.vmbr0.autoconf = 0
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.default.accept_ra = 0
net.ipv6.conf.vmbr0.accept_ra = 0

net.ipv6.conf.default.proxy_ndp = 1
net.ipv6.conf.all.proxy_ndp = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
 
I've found how to resolve the issue!

I'd been looking at this the wrong way. I think that, because of how the Linux bridge works together with OVH's network, you have to set the VM's (OPNsense's) gateway to the host bridge's IPv6 address. First we'll need to set that up.

First, add an IPv6 address to your bridge as per the OVH documentation (https://docs.ovh.com/gb/en/dedicated/network-ipv6/). Mine looks like this; here's a sample of the inet6 part:
Code:
iface vmbr0 inet6 static
    address 2001:41d0:6960:85f::1
    netmask 64
    post-up /sbin/ip -f inet6 route add 2001:41d0:6960:8ff:ff:ff:ff:ff dev vmbr0
    post-up /sbin/ip -f inet6 route add default via 2001:41d0:6960:8ff:ff:ff:ff:ff
    pre-down /sbin/ip -f inet6 route del default via 2001:41d0:6960:8ff:ff:ff:ff:ff
    pre-down /sbin/ip -f inet6 route del 2001:41d0:6960:8ff:ff:ff:ff:ff dev vmbr0
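To make the new inet6 stanza take effect, you can restart networking or just reboot the host (a sketch; be aware that restarting networking on a remote box briefly drops connectivity):
Code:
# apply the bridge changes
systemctl restart networking

# confirm the address and default IPv6 route are in place
ip -6 addr show dev vmbr0
ip -6 route show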

Run the following command so that IPv6 traffic gets forwarded by the host:
sysctl -w net.ipv6.conf.all.forwarding=1
Then edit /etc/sysctl.conf and make the same change there (enable net.ipv6.conf.all.forwarding), so the setting persists across reboots.
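For reference, the persistent entry is just one line (a sketch; you could equally put it in a file under /etc/sysctl.d/):
Code:
# /etc/sysctl.conf: let the host forward IPv6 for the guests behind vmbr0
net.ipv6.conf.all.forwarding = 1

Then run sysctl -p to reload the file without rebooting.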


Then create your OPNsense (or pfSense) VM.
Once it's installed, set your WAN interface's IPv6 address to one you'd like, say 2001:41d0:6960:85f:1000::1, and then add a gateway: instead of using the OVH gateway 2001:41d0:6960:8ff:ff:ff:ff:ff, use the IPv6 address of the Linux bridge, 2001:41d0:6960:85f::1.

So the OPNsense (or pfSense) IPv6 config should look like this:
IPv6: 2001:41d0:6960:85f:1000::1/64
Gateway: 2001:41d0:6960:85f::1

Now you can add whichever VIPs you'd like, and they will be able to route to the internet:
2001:41d0:6960:85f:1000::69/64
2001:41d0:6960:85f:1000::70/64
2001:41d0:6960:85f:1000::71/64

You can test that these work by going into the OPNsense (or pfSense) console and running the following command, which pings Cloudflare's IPv6 DNS server from your chosen IPv6 address:
ping -S 2001:41d0:6960:85f:1000::69 2606:4700:4700::1111

I hope this post helps someone else on Kimsufi (KS), SoYouStart (SYS), or OVH who's using Proxmox as their hypervisor and wants to run OPNsense (or pfSense) as their firewall with IPv6 support.

EDIT: I've just set up another SoYouStart server like this, and it works a treat!
 

Hey @Jonny, thanks for sharing your solution; it's the one that got me the furthest on my quest to support IPv6 on my containers. I do have a problem though, and wondered if you'd be able to help?

I've followed along and can get IPv6 connectivity from the Proxmox host and the pfSense guest (for example, I can SSH into the host using its IPv6 address, and can ping ipv6.google.com and the v6 address of Cloudflare's DNS from both), so all seems good so far. Initially I couldn't get the pfSense guest to ping out to any public IPv6 address, but I solved this using ip6tables on the Proxmox host, accepting all the ports Proxmox requires and then DNATing the rest of the traffic to the pfSense guest:
Code:
# Generated by ip6tables-save v1.8.2 on Sun Mar 07 02:47:16 2021
*nat
:PREROUTING ACCEPT [36:3216]
:INPUT ACCEPT [2:144]
:OUTPUT ACCEPT [2:192]
:POSTROUTING ACCEPT [2:192]
-A PREROUTING -i vmbr0 -p icmp -j ACCEPT
-A PREROUTING -i vmbr0 -p tcp -m multiport --dports 25,2003,3128,8006,5900:5999,60000:60050 -j ACCEPT
-A PREROUTING -i vmbr0 -p udp -m multiport --dports 111,5404,5405 -j ACCEPT
-A PREROUTING -i vmbr0 -j DNAT --to-destination {My-IPv6-block}::2
-A POSTROUTING -s {My-IPv6-block}::/64 -o vmbr0 -j MASQUERADE
COMMIT
# Completed on Sun Mar 07 02:47:16 2021
# Generated by ip6tables-save v1.8.2 on Sun Mar 07 02:47:16 2021
*raw
:PREROUTING ACCEPT [891:183290]
:OUTPUT ACCEPT [689:228149]
-A PREROUTING -i fwbr+ -j CT --zone 1
COMMIT
# Completed on Sun Mar 07 02:47:16 2021
# Generated by ip6tables-save v1.8.2 on Sun Mar 07 02:47:16 2021
*filter
:INPUT ACCEPT [22:1552]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [433:147845]
-A INPUT -i vmbr0 -p tcp -m tcp --dport 22 -j DROP
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sun Mar 07 02:47:16 2021
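For anyone else trying this, the block above is in ip6tables-save format, so loading it looks roughly like this (a sketch; the file path is only an example and assumes something like the iptables-persistent package restores it at boot):
Code:
# load the saved IPv6 ruleset
ip6tables-restore < /etc/iptables/rules.v6

# confirm the NAT rules are active
ip6tables -t nat -L -n -v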

This only seems to let the pfSense guest ping out; incoming pings don't seem to reach the pfSense guest.

I also have an issue getting Proxmox containers to ping out. I created a third IPv6 address in the OVH dashboard and statically assigned it to a Proxmox container for testing. I've tried setting the container's gateway to the pfSense guest's IPv6, the Proxmox host's IPv6, and the OVH gateway IPv6 too, but ping6 always returns "Destination host unreachable" when pinging Cloudflare's DNS IPv6. The container can't ping the pfSense guest either :(

I'm really at a loss as to how to progress; do you have any ideas?
 
