Proxmox VE 5 at OVH using fail-over IPs

Marc Ballat

I was forced to reinstall Proxmox on my dedicated server at OVH last week and took the opportunity to migrate from 4.4 to 5. Note that I installed Proxmox on top of Debian 9.4.

Windows VM, private IP, NAT/PAT works fine.

Debian LXC container, fail-over IP, doesn't work (I can't get it to work ;-)).

Host /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

iface eno1 inet static
        address  94.XXX.XXX.XXX
        netmask  255.255.255.0
        gateway  94.XXX.XXX.254

# This one is used for a private subnet on which VMs and containers use NAT/PAT to be reachable.
auto vmbr0
iface vmbr0 inet static
        address  172.16.0.1
        netmask  255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up /sbin/iptables -t nat -A POSTROUTING -s '172.16.0.0/24' -o eno1 -j MASQUERADE
        post-up /sbin/iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 10139 -j DNAT --to 172.16.0.101:3389
        post-up /sbin/iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 10622 -j DNAT --to 172.16.0.106:22

# First IP failover.
auto vmbr1
iface vmbr1 inet static
        address  178.XXX.XXX.XXX
        netmask  255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0

# Second IP failover.
auto vmbr2
iface vmbr2 inet static
        address  91.XXX.XXX.33
        netmask  255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0

# Third IP failover.
auto vmbr3
iface vmbr3 inet static
        address  91.XXX.XXX.95
        netmask  255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0

Debian 9.4 LXC container's configuration:
Code:
Name : eth0
Mac : 02:00:00:7c:aa:90 (virtual MAC taken from OVH's administration console)
Bridge : vmbr2
IPV4 : 91.XXX.XXX.33/32
Gateway : 94.XXX.XXX.254
IPV6 : none

This gives the following /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

auto eth0
#       dns-nameservers 127.0.0.1 213.186.33.99
#       dns-domain mydomain.com
iface eth0 inet static
        address 91.XXX.XXX.33
        netmask 255.255.255.255
# --- BEGIN PVE ---
        post-up ip route add 94.XXX.XXX.254 dev eth0
        post-up ip route add default via 94.XXX.XXX.254 dev eth0
        pre-down ip route del default via 94.XXX.XXX.254 dev eth0
        pre-down ip route del 94.XXX.XXX.254 dev eth0
# --- END PVE ---

And the result is (drum roll): nothing! I can't ping the host, I can't ping the gateway, let alone the outside world. And I cannot find what is wrong with my configuration.
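For anyone debugging a similar setup, these are the checks that narrow the problem down (interface and address names as above, run as root):
Code:
# on the host: do packets for the fail-over IP reach the physical NIC at all?
tcpdump -n -i eno1 host 91.XXX.XXX.33

# do they make it onto the bridge the container is attached to?
# (if eno1 shows traffic but vmbr2 stays silent, nothing connects the bridge to the outside)
tcpdump -n -i vmbr2 host 91.XXX.XXX.33

# inside the container: are the gateway and default routes actually installed?
ip route show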

Marc
 
I finally solved it on my own. I know there are plenty of people out there who know how to do this, but if even one person is as lost as I was over the last week, here is my configuration.
Code:
auto lo
iface lo inet loopback

iface eno1 inet static
        address  94.xxx.xxx.223
        netmask  255.255.255.0
        gateway  94.xxx.xxx.254
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up /sbin/iptables -t nat -A POSTROUTING -s '172.16.0.0/24' -o eno1 -j MASQUERADE
        post-up /sbin/iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 10139 -j DNAT --to 172.16.0.101:3389
        post-up /sbin/iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 10622 -j DNAT --to 172.16.0.106:22

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
#Use this bridge for machines with a fail-over IP.

auto vmbr1
iface vmbr1 inet static
        address  172.16.0.1
        netmask  255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0
#Use this bridge for machines with a private IP address. 'route add default gw' is necessary from within the VM!
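As a side note, the post-up echo works, but IP forwarding can also be enabled persistently via sysctl, independent of the interface scripts (just an alternative, not required for the setup above):
Code:
# /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/)
net.ipv4.ip_forward=1

# apply immediately without a reboot
sysctl -p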

VMs will make use of vmbr1 if they do not need a public IP address; iptables can even make them reachable from the internet. But what if I want a Debian 9.4 VM with two interfaces, one with a public IP and the other with a private one? Using Proxmox's web interface, do the following:
  • add a network interface called eth0, using vmbr0 as bridge, OVH's fail-over IP/32 (unless it is a block of several IPs), OVH's virtual MAC for that very IP and, finally, the gateway of eno1
  • add a network interface called eth1, using vmbr1 as bridge, 172.16.0.xxx/24 as IP and 172.16.0.1 as gateway
  • start the VM and add the following lines to /etc/network/interfaces in order to fix routing (as generated, it does not work as expected, i.e. it only works if the commands are executed manually from the command line)
Code:
# add under the 'iface eth0 inet static' stanza:
        post-up route del default gw 172.16.0.1 dev eth1
        post-up route add 94.xxx.xxx.254 dev eth0
        post-up route add default gw 94.xxx.xxx.254
        pre-down route del default gw 94.xxx.xxx.254
        pre-down route add default gw 172.16.0.1 dev eth1

# add under the 'iface eth1 inet static' stanza:
        post-up route add -net 172.16.0.0/24 dev eth1
        pre-down route del -net 172.16.0.0/24 dev eth1
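Once both interfaces are up, the quickest sanity check inside the guest is the routing table; roughly this is what it should contain (addresses are the placeholders used above):
Code:
ip route show
# expected, more or less:
#   default via 94.xxx.xxx.254 dev eth0
#   94.xxx.xxx.254 dev eth0 scope link
#   172.16.0.0/24 dev eth1 proto kernel scope link src 172.16.0.xxx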
 
After many hours of tweaking, I finally found a configuration that works. I found it quite difficult to make progress, as networking knowledge is quite specific: unless you have studied networking, it is hard to transfer experience gained in another domain. I hope that the solution below can save other people some time.

OVH has given me one main IP address and four fail-over IP addresses. I also want to connect my VMs and containers to a private subnet: 172.16.0.0/24.

Proxmox VE Host

Here is what my /etc/network/interfaces looks like on the host.
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address  94.xxx.xxx.223
        netmask  255.255.255.0
        gateway  94.xxx.xxx.254
        pointopoint 94.xxx.xxx.223
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        post-up echo 1 > /proc/sys/net/ipv4/conf/vmbr0/proxy_arp
#Use this bridge for machines with a fail-over IP.


auto vmbr1
iface vmbr1 inet static
        address  172.16.0.1
        netmask  255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        #post-up ip route flush table main
        post-up ip route add default via vss-gw-6k.fr.eu dev vmbr0
        post-up ip route add 172.16.0.0/24 dev vmbr1
        post-up echo 1 > /proc/sys/net/ipv4/conf/vmbr1/proxy_arp
#Use this bridge for machines with a private IP address. 'route add default gw' is necessary from within the VM!

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up /sbin/iptables -t nat -A POSTROUTING -s '172.16.0.0/24' -o vmbr0 -j MASQUERADE
        post-up /sbin/iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 10139 -j DNAT --to 172.16.0.101:3389
        post-up /sbin/iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 10622 -j DNAT --to 172.16.0.106:22
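
Changes to /etc/network/interfaces can be applied without a full reboot, but be careful on a remote machine: taking down the wrong interface is exactly what ends in a rescue boot (see below).
Code:
# reload a single bridge after editing the file
ifdown vmbr1 && ifup vmbr1

# or restart networking as a whole (risky over SSH on a remote box)
systemctl restart networking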

This is the physical interface. I think it used to be eth0 in Debian 8, but it has changed to eno1 in Debian 9 (predictable interface names).
Code:
iface eno1 inet manual
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp
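A quick way to check after a reboot that the flags really are set (the same works for the bridges and for ip_forward):
Code:
sysctl net.ipv4.conf.eno1.proxy_arp
sysctl net.ipv4.ip_forward
# both should answer with "... = 1"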
This is the bridge used for VMs and containers that use a public IP (single fail-over or a range). It holds the main server's IP address and points to OVH's gateway.
Code:
auto vmbr0
iface vmbr0 inet static
        address  94.xxx.xxx.223
        netmask  255.255.255.0
        gateway  94.xxx.xxx.254
I am not sure this bit (pointopoint) is necessary and I didn't research its use. bridge-ports eno1 is needed to attach the physical interface to vmbr0 and make it all work.
Code:
        pointopoint 94.xxx.xxx.223
        bridge-ports eno1
This is the bridge for the private subnet. I tried many different things in order to get the routing to work for containers with both a private and a fail-over IP, for containers with just a private IP, and for containers with just a public IP. As far as I can judge, it comes down to routing on the host. Beware that there is a difference between a routing table that works right now and the same table after a reboot: I got the routing to work by manipulating the table with ip route and route, but my server failed to respond after a reboot. I had to rescue-boot it and edit /etc/network/interfaces. I left the flush command commented out so that it serves as a WARNING!
I do not understand what causes the IP of the gateway to be turned into a hostname (vss-gw-6k.fr.eu); presumably it is just the reverse-DNS name of 94.xxx.xxx.254, which route prints unless it is called with -n, so putting the numeric address there is probably safer. Never mind.
Code:
auto vmbr1
iface vmbr1 inet static
        address  172.16.0.1
        netmask  255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        #post-up ip route flush table main
        post-up ip route add default via vss-gw-6k.fr.eu dev vmbr0
        post-up ip route add 172.16.0.0/24 dev vmbr1
        post-up echo 1 > /proc/sys/net/ipv4/conf/vmbr1/proxy_arp
The first command ensures that NAT works for VMs on the private subnet.
The second one does port forwarding from the host to a Windows VM for RDP.
The last one does port forwarding from the host to a Linux container for SSH.
Code:
        post-up /sbin/iptables -t nat -A POSTROUTING -s '172.16.0.0/24' -o vmbr0 -j MASQUERADE
        post-up /sbin/iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 10139 -j DNAT --to 172.16.0.101:3389
        post-up /sbin/iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 10622 -j DNAT --to 172.16.0.106:22
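One caveat with -A in a post-up line: every ifup of vmbr1 appends the rules again. Matching pre-down lines in the same stanza keep the nat table clean when the bridge is bounced (a sketch, mirroring the rules above):
Code:
        pre-down /sbin/iptables -t nat -D POSTROUTING -s '172.16.0.0/24' -o vmbr0 -j MASQUERADE
        pre-down /sbin/iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 10139 -j DNAT --to 172.16.0.101:3389
        pre-down /sbin/iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 10622 -j DNAT --to 172.16.0.106:22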

Network settings for VMs and containers.

If you wish to add an interface with a private IP (172.16.0.xxx in this example), connect it to vmbr1, give it e.g. 172.16.0.106/24 as IP and 172.16.0.1 as gateway.
If you wish to add an interface with a public IP, connect it to vmbr0, give it a fail-over IP with the corresponding virtual MAC address obtained from OVH's manager, and use 94.xxx.xxx.254 as gateway.
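The same can be done from the command line instead of the web interface; something along these lines (the VMIDs 106 and 101 are made-up examples, the MAC is the virtual MAC from OVH's manager):
Code:
# LXC container with a private IP on vmbr1
pct set 106 -net0 name=eth0,bridge=vmbr1,ip=172.16.0.106/24,gw=172.16.0.1

# VM with a fail-over IP on vmbr0 (IP and gateway are then configured inside the guest)
qm set 101 --net0 virtio=02:00:00:7c:aa:90,bridge=vmbr0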

Good luck!