Route failover IP to container

Dec 10, 2019
Hi everybody,

I am totally lost with a problem that has been bugging me for a couple of days. Despite reading numerous posts and sites, I wasn't able to solve it (sorry in advance if this was answered here somewhere and I just didn't get it).

We have a small two-node Proxmox cluster running Proxmox 6.2 at Hetzner. The servers are bound to a Hetzner vSwitch with a public subnet, and the containers are assigned IPs from this public subnet. This works pretty well. However, I now need to route a Hetzner failover IP, which currently points to an older, soon-to-be-discarded server serving a couple of websites (ports 80 and 443), to a container in the Proxmox cluster.

The failover IP must be accessible in addition to the container's normal IP. Hetzner failover IPs do not have MAC addresses and can only be routed to the main IPs of dedicated servers, not to an arbitrary IP like a container IP. In addition, the traffic of the failover IP must be routed through the interface with the main IP. Despite all my attempts, I couldn't get this to work: traffic to the failover IP ends up on the HOST server and doesn't get routed to the container.

At the moment I am trying to route the IP to the container; alternatively, it would be sufficient to forward only ports 80 and 443 to the container.
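
In case routing the full IP never works out, I assume a plain iptables DNAT on the host would also cover the two ports, along these lines (untested sketch; <CONTAINER IP> stands for the container's vSwitch subnet IP):

Code:
# Accept packets for the failover IP and rewrite them to the container
iptables -t nat -A PREROUTING -d <FAILOVER IP> -p tcp --dport 80 -j DNAT --to-destination <CONTAINER IP>:80
iptables -t nat -A PREROUTING -d <FAILOVER IP> -p tcp --dport 443 -j DNAT --to-destination <CONTAINER IP>:443
# Rewrite the source so replies come back through the host
# instead of going straight out the vSwitch gateway
iptables -t nat -A POSTROUTING -d <CONTAINER IP> -p tcp -m multiport --dports 80,443 -j MASQUERADE
# Forwarding must be enabled for any of this to work
sysctl -w net.ipv4.ip_forward=1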

This is my current (non-working) attempt. Does anyone perhaps have a similar setup and can give me a hint? I am totally out of ideas now:

Host configuration:

Code:
auto lo
iface lo inet loopback

iface enp35s0 inet manual

iface enp35s0.4044 inet manual
        mtu 1400

auto enp39s0
iface enp39s0 inet static
        address <LOCAL CLUSTER IP>

auto vmbr0
iface vmbr0 inet static
        address <SERVER MAIN IP>/32
        gateway <SERVER MAIN GATEWAY>
        bridge-ports enp35s0
        bridge-stp off
        bridge-fd 0
        pointopoint <SERVER MAIN GATEWAY>

iface vmbr0 inet6 static
        address <SERVER MAIN IP>
        gateway fe80::1

auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp35s0.4044
        bridge-stp off
        bridge-fd 0
        mtu 1400

auto vmbr2
iface vmbr2 inet static
        address <FAILOVER IP>/32
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        up ip route add <FAILOVER IP>/32 dev vmbr0

Container configuration:

Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address <VSWITCH SUBNET IP>/28
        gateway <VSWITCH SUBNET GATEWAY>
        mtu 1400

iface eth0 inet6 static
        address <VSWITCH SUBNET IP>
        gateway <VSWITCH SUBNET GATEWAY>
        mtu 1400

auto eth1
iface eth1 inet static
        address <FAILOVER IP>/32
# --- BEGIN PVE ---
        post-up ip route add <SERVER MAIN IP> dev eth1
        post-up ip route add default via <SERVER MAIN IP> dev eth1
        pre-down ip route del default via <SERVER MAIN IP> dev eth1
        pre-down ip route del <SERVER MAIN IP> dev eth1
# --- END PVE ---
        pointopoint <SERVER MAIN IP>
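
For debugging, I assume these standard commands on the host would show where things go wrong (the placeholder stands for the real failover IP):

Code:
# Where does the kernel think the failover IP should go?
ip route get <FAILOVER IP>
# Watch whether packets for the failover IP actually arrive and leave on the bridge
tcpdump -ni vmbr0 host <FAILOVER IP>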

Any help is highly appreciated! Thanks
 
Hi,

to help you, more information is needed. Please post the output of:

Code:
ip -c a
ip -c -br a
pct config <vmid>
 
Hi Wolfgang,

thanks a lot for helping out! Meanwhile I have resorted to a much simpler configuration, in which I created a new vmbr interface for the failover IP and configured rinetd to forward ports 80 and 443 to the container (instead of iptables forwarding, because it is only about a dozen or so websites and rinetd is much simpler than iptables).
I am still a little puzzled why it is working, and more importantly I am unsure whether this satisfies Hetzner's rule that only packets with the server NIC's MAC address are allowed to leave the server.

This is the current configuration:
(I have replaced the real IPs with the following substitutes to make reading easier than simply removing the IPs)
Main IP Proxmox host: 110.120.130.140
Gateway Proxmox host: 110.120.130.99
vswitch public subnet IP: 66.77.88.99
vswitch public subnet gateway: 66.77.88.1
Failover IP: 176.177.178.179
I have also removed the IPv6 parts.

Code:
auto lo
iface lo inet loopback

iface enp35s0 inet manual

iface enp35s0.4044 inet manual
        mtu 1400

# Main IP
auto vmbr0
iface vmbr0 inet static
        address 110.120.130.140/32
        gateway 110.120.130.99
        bridge-ports enp35s0
        bridge-stp off
        bridge-fd 0
        pointopoint 110.120.130.99
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/enp35s0/proxy_arp

# VSwitch
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp35s0.4044
        bridge-stp off
        bridge-fd 0
        mtu 1400

# Failover IP
auto vmbr2
iface vmbr2 inet static
        address 176.177.178.179/32
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        up ip route add 176.177.178.179/32 dev vmbr0
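
(To make the two /proc settings on vmbr0 survive independently of the interface scripts, I assume the usual sysctl way would work just as well, e.g. a drop-in file; the file name is my own choice:)

Code:
# /etc/sysctl.d/99-failover.conf (assumed file name)
net.ipv4.ip_forward = 1
net.ipv4.conf.enp35s0.proxy_arp = 1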

The container now isn't aware of the additional IP:

Code:
auto lo
iface lo inet loopback

# IP from public subnet on vswitch
auto eth0
iface eth0 inet static
        address 66.77.88.99/28
        gateway 66.77.88.1
        mtu 1400

What is somewhat surprising to me is that the server still accepts packets for the failover IP, although vmbr2 is down. Is the route of the failover IP to vmbr0 enough to accept packets for this IP?
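
If I understand correctly, the /32 assigned to vmbr2 stays in the kernel's local routing table even while the bridge has no carrier, which would explain why the packets are accepted. I assume this could be checked with:

Code:
ip route show table local | grep 176.177.178.179
ip route get 176.177.178.179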

This is what ip -c -br a returns (with the changed IPs and the IPv6 addresses mangled):
Code:
lo               UNKNOWN        127.0.0.1/8 ::1/128
enp35s0          UP
vmbr0            UP             110.120.130.140 peer 110.120.130.99/32 2a01:XXXXXXX:2/64 fe80::XXXXXXXXXX/64
enp35s0.4044@enp35s0 UP
vmbr1            UP             fe80::XXXXXXXXXd/64
vmbr2            DOWN           176.177.178.179/32 fe80::XXXXXXXXX/64
veth300i0@if2    UP
fwbr300i0        UP
fwpr300p0@fwln300i0 UP
fwln300i0@fwpr300p0 UP
veth600i0@if2    UP
fwbr600i0        UP
fwpr600p0@fwln600i0 UP
fwln600i0@fwpr600p0 UP
veth310i0@if2    UP
fwbr310i0        UP
fwpr310p0@fwln310i0 UP
fwln310i0@fwpr310p0 UP

These are the relevant parts of ip -c a (with changed IPs and mangled MACs):
Code:
2: enp35s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether a8:a1:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a8:a1:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
    inet 110.120.130.140 peer 110.120.130.99/32 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 2a01:XXXXXXXXXXX/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::XXXXXXXXXX/64 scope link
       valid_lft forever preferred_lft forever
5: enp35s0.4044@enp35s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether a8:a1:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
6: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default qlen 1000
    link/ether a8:a1:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
    inet6 fe80::XXXXXXXXXXXX/64 scope link
       valid_lft forever preferred_lft forever
7: vmbr2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 176.177.178.179/32 scope global vmbr2
       valid_lft forever preferred_lft forever
    inet6 fe80::XXXXXXXXXXXX/64 scope link
       valid_lft forever preferred_lft forever

rinetd has only two forwards:
Code:
176.177.178.179 80   66.77.88.99 80
176.177.178.179 443   66.77.88.99 443
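
To check the forwarding end to end, I assume a curl against the failover IP with a forced host name is enough (example.com standing in for one of the real site names):

Code:
curl -v --resolve example.com:443:176.177.178.179 https://example.com/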

So I guess the only really relevant line is actually the "up ip route add" for the failover IP. I needed vmbr2 so that I can bind rinetd to the failover IP (still, I am somewhat surprised that the packets aren't dropped).
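
As an aside: if the dummy bridge ever becomes a nuisance, I assume the same effect could be had without vmbr2 by making the address local on the loopback device instead (untested):

Code:
# Make the kernel treat the failover IP as local without a dedicated bridge
ip addr add 176.177.178.179/32 dev lo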

I guess that in this setup, all packets without a VLAN tag that leave the server should use the MAC address of the main interface. I watched the traffic with tcpdump and it seemed to confirm this, but it is still a major concern for me. Can you perhaps offer a hint why this works at all (and please let me know if this setup isn't advisable)?

Thanks a lot for your help!

(And many, many thanks for providing Proxmox. Especially for very small companies like ours this opens up incredible opportunities!)
 
I do not understand your setup.

vmbr0 is connected to a NIC, but has a private address?
The VM/CT bridge vmbr2 has no NIC port and sets a route to the private network.
But the VM has a gateway outside this private network.

AFAIK Hetzner only allows traffic from the main IP, which means all traffic has to be routed over the main IP.
But a private IP can't be correct.

Also, your config is missing proxy_arp on the main interface.
 
Thanks for the hint about proxy_arp; I have added it to the configuration. About the IPs: sorry for the confusion, these are not the real IPs (as mentioned above the config files), but I agree the original substitutes were badly chosen. I will edit the configs above to reflect this.
Thanks for helping!
 
