Question about optimal network configuration on Hetzner: combining public and private subnets

jossemi

Hi guys,

I've configured the network with a public IP plus public and private subnets. Apparently it works (with IPv4), but I don't know if this configuration is the best. I'm a noob at this, so I would be grateful if someone with more experience could give me an opinion or advise me:

Code:
### Hetzner Online GmbH - installimage

# All IPs are fictitious

# Loopback device:
auto lo
iface lo inet loopback
iface lo inet6 loopback

# device: eth0
auto eth0
iface eth0 inet static
  address   5.9.49.68
  netmask   255.255.255.255
  gateway   5.9.49.161
  pointopoint    5.9.49.161

iface eth0 inet6 static
  address 2a01:4f8:167:5471::2
  netmask 128
  gateway fe80::1
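  # 'sysctl -p' below reloads /etc/sysctl.conf, presumably so that
  # net.ipv4.ip_forward / net.ipv6.conf.all.forwarding are enabled for this routed setup.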
  up sysctl -p

# device: vmbr0
auto vmbr0
iface vmbr0 inet static
  address   5.9.49.68
  netmask   255.255.255.255
  bridge_ports none
  bridge_stp off
  bridge_fd 0

  # Public subnet block
  up route add -host 5.9.51.158/32 dev vmbr0
  up route add -host 5.9.51.159/32 dev vmbr0
  up route add -host 5.9.51.160/32 dev vmbr0
  up route add -host 5.9.51.161/32 dev vmbr0
  up route add -host 5.9.51.162/32 dev vmbr0
  up route add -host 5.9.51.163/32 dev vmbr0
  up route add -host 5.9.51.164/32 dev vmbr0

iface vmbr0 inet6 static
  address 2a01:4f8:167:5471::2
  netmask 64

# device: vmbr192
auto vmbr192
iface vmbr192 inet static
  address 192.168.1.1
  netmask 255.255.255.0
  bridge_ports none
  bridge_stp off
  bridge_fd 0
  post-up iptables -t nat -A POSTROUTING -s '192.168.1.0/24' -o vmbr0 -j MASQUERADE
  post-down iptables -t nat -D POSTROUTING -s '192.168.1.0/24' -o vmbr0 -j MASQUERADE

  # I noticed that after 3 or 4 hours the containers lost internet connectivity.
  # Adding the following lines brings connectivity back.
  post-up iptables -t nat -A POSTROUTING -s '192.168.1.0/24' -o eth0 -j MASQUERADE
  post-down iptables -t nat -D POSTROUTING -s '192.168.1.0/24' -o eth0 -j MASQUERADE
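  # (Likely explanation, not from the original post: MASQUERADE matches on the
  # outgoing interface, and this host's default route leaves via eth0, so only
  # the '-o eth0' rules can match internet-bound traffic; the '-o vmbr0' rules
  # above are probably never hit for it.)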

# iface vmbr192 inet6 static
# ...
# I don't know how to configure IPv6 connectivity on vmbr192 using the IPv6 block offered by Hetzner.
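# (Hedged sketch, not from the original post: one common approach is to carve a
#  smaller prefix for vmbr192 out of the /64 that Hetzner routes to this host,
#  e.g. a /80, and let the host forward for it; the prefix below is illustrative.)
# iface vmbr192 inet6 static
#   address 2a01:4f8:167:5471:192::1
#   netmask 80
#   post-up sysctl -w net.ipv6.conf.all.forwarding=1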

pveversion -v
Code:
pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
pve-kernel-4.4.13-1-pve: 4.4.13-56
pve-kernel-4.4.8-1-pve: 4.4.8-52
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-42
qemu-server: 4.0-83
pve-firmware: 1.1-8
libpve-common-perl: 4.0-70
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-55
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-19
pve-container: 1.0-70
pve-firewall: 2.0-29
pve-ha-manager: 1.0-32
ksm-control-daemon: not correctly installed
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1

Sincerely,
Josemi
 
# device: eth0
auto eth0
iface eth0 inet static
address 5.9.49.68
netmask 255.255.255.255
gateway 5.9.49.161
pointopoint 5.9.49.161

.....
# device: vmbr0
auto vmbr0
iface vmbr0 inet static
address 5.9.49.68
netmask 255.255.255.255
bridge_ports none
bridge_stp off
bridge_fd 0

Hetzner is known to be a bit tricky to configure (and I am not familiar with it); apart from that:

It's strange that two interfaces (eth0, vmbr0) both have the same IP address assigned. From my point of view only eth0 should have it, and only eth0 should appear in the "iptables" command.

To understand the configuration completely, it would be necessary to know how the containers and VMs are configured.
 
Thank you for your answer, Richard. You're right; I came from OVH, and the Proxmox configuration there was (in general) a little less complicated than at Hetzner.

I took part of the previous configuration from Hetzner's manual: https://wiki.hetzner.de/index.php/Proxmox_VE/en#Network_configuration_host_system_KVM.2FRouted (Network configuration host system KVM/Routed). As you can see, Hetzner recommends that users use two interfaces with the same IP in routed mode.

Thanks for helping me! (I'm a noob :))
Josemi
 
@jossemi
As the post is a bit older: do you still need help with the Hetzner config (also in German, if easier)?

We have several hosts (small to super-large, including working Ceph) running at Hetzner. Usually we route whole subnets and not every IP address on its own. This is much simpler to configure and handle, and actually straightforward.
 
If you need help, please write me a PM. I can help with the setup via private billing.

Info:
We usually run a local HAProxy in front of port 8006; for some reason the connection is much more stable and faster than going through 8006 directly.
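
For reference, a minimal sketch of such a local HAProxy in front of the PVE web UI (the bind port 8007 and plain TCP pass-through are assumptions, not details from the post):

Code:
# /etc/haproxy/haproxy.cfg (sketch)
defaults
    mode tcp
    timeout connect 5s
    timeout client  1h
    timeout server  1h

# Accept UI connections locally and relay them to pveproxy on 8006.
frontend pve_ui
    bind :8007
    default_backend pve

backend pve
    server local 127.0.0.1:8006 check

In TCP mode HAProxy just relays the TLS connection, so pveproxy still serves its own certificate.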

"FOS***" = FailOverSubnet
"LAN" = LAN between hosts via private switch

This config is from a PVE cluster with 3 nodes, all having FOS subnets, so that we just have to redirect a FOS subnet to another host and start the VMs: online again. This can be combined with load balancing or HA webservers, all having several IPs, being routed via LAN if not directly attached.

One basically then just connects VMs to the bridges (e.g. vmbr101), assigns an IP from the subnet, and is good to go (see the guest-side sketch after the Robot note below).

Code:
auto lo
iface lo inet loopback
iface lo inet6 loopback

iface eth0 inet manual
iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        address  1.2.3.4
        netmask  255.255.255.192
        gateway  1.2.2.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
#HOST & WAN

#iface vmbr0 inet6 static
#        address  2a01:WHATEVER
#        netmask  64
#        gateway  fe80::1

auto vmbr1
iface vmbr1 inet static
        address  192.168.1.1
        netmask  255.255.255.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
#LAN

auto vmbr88
iface vmbr88 inet static
        address 192.168.5.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
#VPN - Backup

auto vmbr101
iface vmbr101 inet static
        address  2.2.2.2
        netmask  255.255.255.248
        bridge_ports none
        bridge_stp off
        bridge_fd 0
#FOS101

auto vmbr102
iface vmbr102 inet static
        address  3.3.3.3
        netmask  255.255.255.248
        bridge_ports none
        bridge_stp off
        bridge_fd 0
#FOS102

auto vmbr103
iface vmbr103 inet static
        address  4.4.4.4
        netmask  255.255.255.248
        bridge_ports none
        bridge_stp off
        bridge_fd 0
#FOS103

This is how it looks in the Hetzner Robot: the first IP is the network address, the last is the broadcast address.
The first usable IP in the subnet belongs to the host itself, being the default GW for the net.
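
To make that concrete, a guest attached to vmbr101 from the config above might be set up like this (a sketch; 2.2.2.3 is a made-up address from the fictitious /29, with the host's 2.2.2.2 as gateway):

Code:
# /etc/network/interfaces inside a VM on vmbr101 (sketch)
auto eth0
iface eth0 inet static
        address 2.2.2.3
        netmask 255.255.255.248
        gateway 2.2.2.2   # the host's address on the FOS bridge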

 
Interesting config. How do you handle storage at Hetzner? Do you have a 10G LAN on a private switch and separate ZFS storage servers?

In this setup we don't; we have several webservers running, using a nested, hosted GlusterFS storage net, routed via LAN, as the storage backend. Otherwise one can work with VLANs to separate the storage network from the private LAN.

We don't use 10G yet; it's actually very expensive when looking at average Hetzner server pricing. We'd rather run VMs in HA than have to rely on an HA storage backend. The nested net/storage backend costs some performance for sure, but since all servers run SSDs and/or NVMe cards, this is totally fine, and the additional 1G network cards are more than enough here.

It's cheaper to have several "ready-built" servers in small clusters than bigger ones with centralized storage, which one has to manage as well.

Running the central storage backend nested keeps us flexible, and migrations can be done very quickly and easily.
 
It is not important how you use the IP addresses. What is important is how you set up the traffic rules. iptables firewalls do not care whether an IP address is private or public.
VLANs are a good thing, but they are very difficult to set up the right way. My own opinion is that it is simpler to use something like dynamic routing (OSPF) to segregate private IPs from WAN/internet IPs. I guess that with OSPF you will also have redundant routes and faster Proxmox VM relocation (compared with the same setup without OSPF: nanoseconds compared with seconds).
OSPF does not add any info at L2 like VLANs do. It is also reliable (if the route from A to B does not work, no problem, as long as we have another redundant route from A to B via C).
 
In this setup we don't; we have several webservers running, using a nested, hosted GlusterFS storage net, routed via LAN, as the storage backend. Otherwise one can work with VLANs to separate the storage network from the private LAN.

We don't use 10G yet; it's actually very expensive when looking at average Hetzner server pricing. We'd rather run VMs in HA than have to rely on an HA storage backend. The nested net/storage backend costs some performance for sure, but since all servers run SSDs and/or NVMe cards, this is totally fine, and the additional 1G network cards are more than enough here.

It's cheaper to have several "ready-built" servers in small clusters than bigger ones with centralized storage, which one has to manage as well.

Running the central storage backend nested keeps us flexible, and migrations can be done very quickly and easily.



Hetzner is nice to use, but without ZFS you cannot bet that your data is safe. On Hetzner I have had 3 incidents with them: one HDD broke (ZFS saved me), and in 2 situations the Hetzner Intel Data Center SSDs showed some errors on a regular ZFS scrub. And if a disk is broken, it is Hetzner's job to replace it, not the admin's. Like many others, they know how to take the money, but they forget to do their duty; clients can do their task for them, because it's cheap (= 0 euro).
 
Hetzner is nice to use, but without ZFS you cannot bet that your data is safe. On Hetzner I have had 3 incidents with them: one HDD broke (ZFS saved me), and in 2 situations the Hetzner Intel Data Center SSDs showed some errors on a regular ZFS scrub. And if a disk is broken, it is Hetzner's job to replace it, not the admin's. Like many others, they know how to take the money, but they forget to do their duty; clients can do their task for them, because it's cheap (= 0 euro).

For us Hetzner works just fine; drives are replaced free of any charge, and automatically, when broken. They even write us an email to ask for an appropriate time to change the drives, and tell us that we should shut down the server in advance.
We use HW RAID and SW RAID, nothing else. We have never had any problems; even with broken drives, everything is up and running.

How do you expect Hetzner to know that there is a disk error if it's still up and running? I mean, you don't want them to scan your disks for errors, because that would require them to actually monitor your disks, having access to your data. They monitor whether there is a disk error, and if you run a SW RAID you should do the monitoring yourself.
If you want fully managed servers you can pay for them; otherwise those very cheap servers are just fine, and an admin should know what he is doing, meaning that he should monitor the systems and notify Hetzner in this case to fix things. Hetzner never let us down, and always fixed what was broken fast: within minutes, never hours!
 
It is not important how you use the IP addresses. What is important is how you set up the traffic rules. iptables firewalls do not care whether an IP address is private or public.
VLANs are a good thing, but they are very difficult to set up the right way. My own opinion is that it is simpler to use something like dynamic routing (OSPF) to segregate private IPs from WAN/internet IPs. I guess that with OSPF you will also have redundant routes and faster Proxmox VM relocation (compared with the same setup without OSPF: nanoseconds compared with seconds).
OSPF does not add any info at L2 like VLANs do. It is also reliable (if the route from A to B does not work, no problem, as long as we have another redundant route from A to B via C).

Well, that's why we use a separate NIC to separate it on a physical level. Cluster communication should never go via the public network.
Setting up routes via OSPF is fine, but it requires the knowledge to do so and is usually more complex for the average admin to understand (our clients are "average admins"). So we take the simple approach, and it's working fine.
Additionally, Hetzner has had firewalls for every vHost and host for a few months now, including some basic DDoS defense mechanisms.

Can you post an example of your OSPF rules so we can all learn from it? :)
 
How do you expect Hetzner to know that there is a disk error if it's still up and running? I mean, you don't want them to scan your disks for errors, because that would require them to actually monitor your disks, having access to your data. They monitor whether there is a disk error, and if you run a SW RAID you should do the monitoring yourself.
If you want fully managed servers you can pay for them; otherwise those very cheap servers are just fine, and an admin should know what he is doing, meaning that he should monitor the systems and notify Hetzner in this case to fix things. Hetzner never let us down, and always fixed what was broken fast: within minutes, never hours!

They were fully managed servers (3 of them). They could see SMART (iDRAC or equivalent). In my case they promised that the replacement would happen at xx:yy, and I had to wait 3+ hours. But maybe it was just a run of bad luck ;)
 
Well, that's why we use a separate NIC to separate it on a physical level. Cluster communication should never go via the public network.
Setting up routes via OSPF is fine, but it requires the knowledge to do so and is usually more complex for the average admin to understand (our clients are "average admins"). So we take the simple approach, and it's working fine.
Can you post an example of your OSPF rules so we can all learn from it? :)

No, because it involves some hardware routers (which can speak the OSPF language). But I can explain, or at least I will try ;)

The clients have such a HW router as their default GW. OSPF has a null route for any restricted network (like the storage LAN) for any normal client. So a client cannot reach any restricted network, because they do not have a route to these types of LANs! The same goes for the storage network (2 x HW routers, also OSPF-capable).
So nothing complicated or fancy. No firewalls, no VLANs. But I must admit this solution will not fit every IT landscape; for me it was good. Anyway, it is more complicated to set up VLANs compared with a (simple) OSPF setup. The result is not the same thing (OSPF vs. VLANs), but I prefer OSPF when I have the chance to choose... ;) OSPF is very dynamic (VLANs are mostly static), so I can reconfigure an entire network in several minutes (let's say I want to change the default GW, some route paths, or some route costs). Another good thing: I do not need any bonded interfaces on the storage LAN (OSPF gives redundant paths with different costs).
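
No actual config was posted (it lives on hardware routers), but as a rough illustration of the blackhole-plus-OSPF idea, an equivalent software setup might look like this in FRR syntax; FRR itself and all addresses are assumptions, not details from the thread:

Code:
! /etc/frr/frr.conf (sketch)
! Blackhole the restricted storage LAN on routers that serve normal clients,
! so those clients simply have no working route to it.
ip route 10.10.10.0/24 blackhole

router ospf
 ospf router-id 192.168.1.1
 ! Advertise the client LAN into area 0.
 network 192.168.1.0/24 area 0
 ! Propagate the blackhole to OSPF neighbours as well.
 redistribute static

Redundant paths with different preferences would then be set per link with an interface-level 'ip ospf cost'.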
 
@DerDanilo
If you have configured a vmbrX to use a subnet, this would mean that every VM on this bridge could use any of the IPs in the subnet. This is kind of bad, because a "VM client" could change his IP himself to a different one in the same subnet, right?
 
@DerDanilo
If you have configured a vmbrX to use a subnet, this would mean that every VM on this bridge could use any of the IPs in the subnet. This is kind of bad, because a "VM client" could change his IP himself to a different one in the same subnet, right?

That is correct, but we are the only ones who configure the VMs; none of our clients have access to them in a way that would allow changing the IPs.
This solution is not for VPS reselling, but it works great for private clouds. :)
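
For completeness (not something described above): if a guest did need to be pinned to its assigned address, Proxmox VE's firewall supports per-NIC ipfilter ipsets; a sketch for a hypothetical VM 101:

Code:
# /etc/pve/firewall/101.fw (sketch; VM ID and address are made up)
[OPTIONS]
enable: 1
ipfilter: 1

# With ipfilter enabled, net0 may only send traffic from addresses in this set.
[IPSET ipfilter-net0]
2.2.2.3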
 
