Hetzner and failover IP

decibel83

Renowned Member
Oct 15, 2008
Hi.
I installed a Proxmox host on a Hetzner server, and I successfully configured the additional IP addresses on a bridged network interface using the virtual MAC addresses provided by Hetzner.
I also bought a failover IP, which is a single IP address with netmask 255.255.255.255. It cannot have a virtual MAC address because it is routed to the main IP address of the physical server, not to an additional one.
I tried to get the failover IP working on the virtual machine, but without any success.

Could you help me please?

Thank you very much!
Bye.
 
For every failover IP address, you'll have to create a route on the node.
Example: ip r add FAIL.OVER.IP.ADDRESS/32 dev vmbr0
The actual problem is that if you want to do live migration between cluster nodes, you have to:
1) migrate the VM
2) change the default route inside the VM
3) remove the route from the source node
4) add the route to the destination node
5) change the route for this IP to the new node in the Hetzner Robot interface, or via the Robot API
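The five steps above can be sketched as a shell script. Everything below is a placeholder (documentation-range IPs, made-up node names, dummy Robot credentials), not values from this thread; commands are only printed unless RUN=1 is set.

```shell
#!/bin/bash
# Rough sketch of the five failover-IP migration steps. All values are
# placeholders; adapt them to your cluster before actually running anything.

VMID=100
FAILOVER_IP="203.0.113.10"    # the /32 failover IP
SRC_NODE="node1"              # current node
DST_NODE="node2"              # target node
DST_MAIN_IP="198.51.100.2"    # main IP of the target node

# Print commands instead of executing them unless RUN=1 is set.
run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "$@"; fi; }

run qm migrate "$VMID" "$DST_NODE" --online                          # 1) migrate VM
run ssh "$FAILOVER_IP" "ip route replace default via $DST_MAIN_IP"   # 2) new default route inside VM
run ssh "$SRC_NODE" "ip route del $FAILOVER_IP/32 dev vmbr0"         # 3) remove route on source node
run ssh "$DST_NODE" "ip route add $FAILOVER_IP/32 dev vmbr0"         # 4) add route on destination node
run curl -s -u USER:PASS "https://robot-ws.your-server.de/failover/$FAILOVER_IP" -d "active_server_ip=$DST_MAIN_IP"   # 5) repoint via Robot API
```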

I have created a Debian package (my Proxmox is installed on Debian servers) that does exactly that (Hetzner-specific).
It is not recommended for beginners; use it at your own risk.
Also: READ the readme file first! By default nothing is done; the script expects configuration (VM failover IP addresses, physical node addresses, etc.).
It should work on Debian 8 and Debian 9.
Code:
wget -O - https://deb.inlink.ltd/repo.key | apt-key add -
echo 'deb https://deb.inlink.ltd/debian stretch main' > /etc/apt/sources.list.d/inlink.list
apt update ; apt install clusterhelper

Code:
# README #


# This is a Proxmox cluster helper for Hetzner-specific network configuration.
# It should be installed on every host and on every VM that has a public failover IP address.
# It will help to create routes inside VMs and hosts in case of VM migration.

# First, install the ca-certificates and apt-transport-https packages, since deb.inlink.ltd uses a Let's Encrypt SSL certificate and its CA does not ship with Debian by default.
# Then add the inlink repository to the apt config of the hosts and VMs.
# This procedure is the same for cluster hosts and virtual machines.

$ apt install ca-certificates apt-transport-https
$ wget -O - https://deb.inlink.ltd/repo.key | apt-key add -
$ echo 'deb https://deb.inlink.ltd/debian jessie main' > /etc/apt/sources.list.d/inlink.list
$ apt update ; apt install clusterhelper



# If you installed this on a VM, you must create the hostlist configuration file, since the VM has no idea what hosts are in the cluster.
# Replace the X's with your cluster node addresses.

$ echo -e "XXX.XXX.XXX.XXX\nXXX.XXX.XXX.XXX\nXXX.XXX.XXX.XXX\n" > /opt/inlink/clusterhelper/hostlist.conf




# If you installed it on a cluster host, then you must have the VM iplist configuration file on every cluster node.
# Since the Proxmox cluster synchronizes the /etc/pve/ folder to every cluster node, create the VM list there:

$ echo -e "#First VM description\nXXX.XXX.XXX.XXX\n#Second VM description\nXXX.XXX.XXX.XXX\n" > /etc/pve/vmlist.conf

# In the future, when adding failover IP addresses to the cluster, you must also add those IP addresses to the vmlist.conf file.
# On the cluster host, you must also create the Hetzner API helper script; this can also be stored in /etc/pve/ so it will be replicated automatically to every cluster host.
# Example helper script /etc/pve/hetzner.sh (replace username and password with your Hetzner web interface username and password):

------

cat <<'EOF' > /etc/pve/hetzner.sh
#!/bin/bash

/usr/bin/curl -s -u XXXXXXX:XXXXXXX https://robot-ws.your-server.de/failover/${1} -d active_server_ip=${2}
EOF

------

# Files in the folder /etc/pve/ cannot be made executable, so don't try to do that.
# The clusterhelper program will execute hetzner.sh regardless of the execute bit.
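For reference, invoking the helper by hand would look like the sketch below. The wrapper name and the example addresses are mine, not part of the package; the script takes the failover IP as its first argument and the new active server's main IP as its second, matching the curl call above.

```shell
#!/bin/bash
# Manual invocation sketch. Since files under /etc/pve/ cannot carry the
# execute bit, the script must be run through bash explicitly.
hetzner_switch() {
    bash /etc/pve/hetzner.sh "$1" "$2"   # $1 = failover IP, $2 = new active server IP
}
# Example with placeholder documentation addresses:
# hetzner_switch 203.0.113.10 198.51.100.2
```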
 
Hi gallew,

I'm also on Debian (I installed Proxmox from the ISO image, so it came as a full OS + packages).

How did you set up live migration with Hetzner?

Did you set up a separate server for shared storage?

Hetzner does not offer any iSCSI storage, so that's a bummer.

We currently have 1 Dell server and don't have a cluster.

Did you also choose Dell hardware, or are you using Hetzner's own (cheap) machines?
 
Oh, that's a long topic.
The cluster at Hetzner is set up using unicast UDP, since Hetzner's network does not support multicast.
For shared storage, I use NFS on every machine, but I use the NFS storage only for live migration.
I move the VM disks from ZFS to NFS, then migrate the server, and then on the new node move the disks from NFS back to ZFS.
So the VM runs from NFS storage only during the migration.
I also replicate the VM storage between nodes, so in case of failure I can start the VM on another node.
I use Hetzner's own machines; so far there are no performance problems with them, but it depends on what you run there.
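The disk shuffle described above maps onto `qm move_disk` roughly as follows. The VMID and the storage IDs (`local-zfs`, `nfs-shared`) are assumptions of mine; substitute your own storage names. The sketch only prints the commands.

```shell
#!/bin/bash
# Dry-run sketch of the ZFS -> NFS -> migrate -> NFS -> ZFS workflow.
# VMID and storage names are placeholders; commands are printed, not executed.
VMID=100
plan() { echo "$@"; }

plan qm move_disk "$VMID" scsi0 nfs-shared --delete 1   # move disk onto shared NFS
plan qm migrate "$VMID" node2 --online                  # live-migrate while the disk is on NFS
# then, on the new node:
plan qm move_disk "$VMID" scsi0 local-zfs --delete 1    # move disk back to local ZFS
```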
 
Hello guys, a late answer, but useful for getting a working failover IP on a CT/VM with Proxmox on Hetzner.

Here is the configuration for the bridge on the host.

root@p1 ~ # cat /etc/network/interfaces

auto lo
iface lo inet loopback
iface lo inet6 loopback

auto enp3s0
iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
# MAIN_IP, NETMASK, GW, BC PROVIDED BY HETZNER ROBOT INTERFACE
address [MAIN_IP]
netmask [NETMASK]
gateway [GW]
broadcast [BC]
bridge_ports enp3s0
bridge_stp off
bridge_fd 1
bridge_hello 2
bridge_maxage 12
# The first additional IP is configured directly on a VM in my setup, with the MAC provided by the Hetzner Robot
# up ip route add [ADD_IP]/[NETMASK] dev vmbr0
# Failover IP! THIS MUST BE CONFIGURED HERE
up ip route add [FAILOVER_IP] dev vmbr0
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up echo 1 > /proc/sys/net/ipv4/conf/vmbr0/proxy_arp
pre-down echo 0 > /proc/sys/net/ipv4/ip_forward
pre-down echo 0 > /proc/sys/net/ipv4/conf/vmbr0/proxy_arp

auto vmbr1
iface vmbr1 inet manual
bridge_ports none
bridge_stp off
bridge_fd 0

Here is the configuration for the VM/container:


auto lo
iface lo inet loopback

auto eth0

iface eth0 inet static
address [FAILOVER_IP]
netmask 255.255.255.255
# --- BEGIN PVE ---
post-up ip route add [MAIN_IP] dev eth0
post-up ip route add default via [MAIN_IP] dev eth0
pre-down ip route del default via [MAIN_IP] dev eth0
pre-down ip route del [MAIN_IP] dev eth0
# --- END PVE ---

auto eth1
iface eth1 inet dhcp

iface eth1 inet6 dhcp


In a multi-node setup, I use the same host configuration on all nodes; Hetzner won't complain about a configured but currently unused additional route.
I'm still working on a migration script that will handle the gateway change (main_ip of node1 -> main_ip of node2) for a migrated container with a failover IP.
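For anyone scripting this, the gateway change inside the guest boils down to undoing the PVE post-up routes for the old node's main IP and re-adding them for the new one. A minimal sketch (both gateway IPs are placeholder documentation addresses):

```shell
#!/bin/bash
# Sketch of the route switch a migration script would run inside the CT/VM
# after it lands on the new node. Both gateway IPs are placeholders.
OLD_GW="198.51.100.1"   # main IP of the old node
NEW_GW="198.51.100.2"   # main IP of the new node

gw_switch_cmds() {
    echo "ip route del default via $OLD_GW dev eth0"
    echo "ip route del $OLD_GW dev eth0"
    echo "ip route add $NEW_GW dev eth0"
    echo "ip route add default via $NEW_GW dev eth0"
}
gw_switch_cmds   # prints the commands; run them inside the guest to apply
```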
 
