Hetzner vSwitch and Proxmox

Petar Kozic
Nov 6, 2018
Hi folks,
I'm using several dedicated servers with Proxmox, all of them at Hetzner. On each of those servers I have added an additional IP subnet and DHCP (isc-dhcp), and everything works.

Now I want to use the Hetzner vSwitch, because there I can add several IP subnets in the same VLAN.
However, I have a problem with the configuration.

I did everything according to the Hetzner manual. On a freshly installed Proxmox server I have these default settings:

Code:
### Hetzner Online GmbH installimage

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback
iface lo inet6 loopback

auto enp2s0
iface enp2s0 inet static
  address 78.46.xxx.xxx
  netmask 255.255.xxx.xxx
  gateway 78.46.xxx.xxx
  #route 78.46.xxx.xxx via 78.46.xxx.xxx
  up route add -net 78.46.xxx.xxx netmask 255.255.xxx.xxx gw 78.46.xxx.xxx dev enp2s0

iface enp2s0 inet6 static
  address 2a01:4f8:xxx:xxx::x
  netmask xx
  gateway fexx::1

Then, following the Hetzner manual, I added the subnet to the vSwitch via their Robot interface and assigned a VLAN ID. That is done.

Then I need to do the following:

Example configuration for the network card "enp0s31f6", with the VLAN ID 4000

Create a VLAN device

Code:
ip link add link enp0s31f6 name enp0s31f6.4000 type vlan id 4000
ip link set enp0s31f6.4000 mtu 1400
ip link set dev enp0s31f6.4000 up

Configure IP address 192.168.100.1 from the private subnet 192.168.100.0/24

Code:
ip addr add 192.168.100.1/24 brd 192.168.100.255 dev enp0s31f6.4000

Public subnet
You need to create an additional routing table for the public subnet so you can configure another default gateway.

Example configuration for IP 213.239.252.50 from the public subnet 213.239.252.48/29, interface enp0s31f6.4000

Code:
echo "1 vswitch" >> /etc/iproute2/rt_tables
ip addr add 213.239.252.50/29 dev enp0s31f6.4000
ip rule add from 213.239.252.50 lookup vswitch
ip rule add to 213.239.252.50 lookup vswitch
ip route add default via 213.239.252.49 dev enp0s31f6.4000 table vswitch


Example Debian configuration

Interface enp0s31f6, VLAN 4000, private network
Code:
# /etc/network/interfaces
auto enp0s31f6.4000
iface enp0s31f6.4000 inet static
  address 192.168.100.1
  netmask 255.255.255.0
  vlan-raw-device enp0s31f6
  mtu 1400


After I did that, I just added a vmbr0 interface:

Code:
auto vmbr0
iface vmbr0 inet static
  address 192.168.100.2
  netmask 255.255.255.0
  bridge_ports enp2s0.4000
  bridge_stp off
  bridge_fd 0

When I create and install an Ubuntu VM on Proxmox and assign it this IP, 213.239.252.50 (this is the example IP from the manual, not the real one), I can ping everything, but I can't do apt-get update or curl. DNS resolves hostnames, but there is no traffic.

When I do a traceroute from my computer to 213.239.252.50, it finishes without problems.
When I connect over SSH, I can log in, but if I try to run something like top or ps aux, it stops responding.
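A quick way to check whether full-size packets actually make it through (a sketch run from inside the VM; the payload size plus 28 bytes of IP/ICMP headers gives the packet size, and the target is just the example gateway from above):

Code:
# 1472 + 28 = 1500-byte packets, sent with the "don't fragment" bit set
ping -M do -s 1472 213.239.252.49
# 1372 + 28 = 1400-byte packets, which should still pass
ping -M do -s 1372 213.239.252.49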
 
Hi!

I'm gonna have a project where I will have time to play with the Hetzner vSwitches soon.

Did you enable forwarding on the host? Not sure if it's really required with this setup, but it's still the same NIC, hence it might be required.
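In case it helps, enabling forwarding on the Proxmox host is usually just a sysctl (a generic sketch, not specific to the vSwitch setup):

Code:
# enable IPv4 forwarding for the running system
sysctl -w net.ipv4.ip_forward=1
# make it persistent across reboots
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p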
 
Hi @DerDanilo
thank you for your answer. Yes, I have set that up.

Indeed, I found the problem. The problem was the MTU. Because the VLAN interface enp2s0.4000 has an MTU of 1400,
I need to set the same MTU on the VM interfaces. Since I handle IP assignment with isc-dhcp-server, I also send the MTU over DHCP, and everything works.
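For reference, isc-dhcp-server can hand out the MTU with the standard interface-mtu option; a minimal sketch of the relevant dhcpd.conf part, assuming the 192.168.100.0/24 subnet from the example above (the range is just a placeholder):

Code:
# /etc/dhcp/dhcpd.conf (excerpt)
subnet 192.168.100.0 netmask 255.255.255.0 {
  range 192.168.100.100 192.168.100.200;
  option routers 192.168.100.1;
  option interface-mtu 1400;  # clients bring their NIC up with MTU 1400
}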
 
Just tested this yesterday: running a single PVE cluster over multiple DCs is possible now, and it is as stable as the underlying network itself.

Already switched one bigger customer project and got rid of tinc to finally remove this overhead.
Data is flying much faster now.

Awesome. If anybody needs help with this please write me a PM.
 
I have my Proxmox cluster set up on Hetzner across 3 DCs via their VLAN. If you need any assistance getting it working, then give me a shout.

I have a /29 on node 1; all traffic is passed through an OPNsense FW on node 1 and back through the VLAN to the other guests on the private VLAN.

What got it working for me was to edit the file

/usr/share/perl5/PVE/QemuServer.pm

looking for the config block

sub print_netdevice_full {

then adding the line

$tmpstr .= ",host_mtu=1400" if $net->{model} eq 'virtio';

under the line

$tmpstr .= ",bootindex=$net->{bootindex}" if $net->{bootindex} ;

Reboot the host, and all VMs will then have their MTU set to 1400.
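Putting those pieces together, the patched part of print_netdevice_full would look roughly like this (just a sketch, with the surrounding code abbreviated):

Code:
sub print_netdevice_full {
    # ... existing code that builds $tmpstr ...
    $tmpstr .= ",bootindex=$net->{bootindex}" if $net->{bootindex} ;
    # added line: force host_mtu=1400 for virtio NICs
    $tmpstr .= ",host_mtu=1400" if $net->{model} eq 'virtio';
    # ... rest of the sub ...
}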
 
This change might be reverted with the next update.

I change the MTU for each NIC on every VM, which works fine.
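For example, inside a guest the equivalent one-off change would be something like this (a sketch; ens18 is just a typical virtio interface name and may differ in your VMs):

Code:
# inside the VM: lower the MTU for the current session
ip link set dev ens18 mtu 1400
ip link show ens18   # verify

To make it persistent, set the MTU in the guest's own network configuration or hand it out via DHCP as described earlier in the thread.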

Are you OK with your single point of failure?

You can use additional subnets via the vSwitch, which gives a lot of flexibility.
 
Actually, the simplest way to add a VLAN-based, publicly routed subnet (like a /28 you can buy additionally for your vSwitch) is with these lines:

Code:
auto lo
iface lo inet loopback
iface lo inet6 loopback

auto enp35s0
iface enp35s0 inet static
  address xxx.xxx.xxx.xxx
  netmask xxx.xxx.xxx.xxx
  gateway xxx.xxx.xxx.xxx
  up route add -net xxx.xxx.xxx.xxx netmask xxx.xxx.xxx.xxx gw xxx.xxx.xxx.xxx dev enp35s0

iface enp35s0 inet6 static
  address xxxx:xxxx:xxxx::xx
  netmask 64
  gateway fe80::1

iface enp35s0.4000 inet manual
  vlan-raw-device enp35s0
  mtu 1400

auto vmbr0
iface vmbr0 inet static
  address 10.10.10.1
  netmask 255.255.255.0
  bridge_ports enp35s0.4000
  bridge_stp off
  bridge_fd 0

I don't need any fancy route add stuff, and now I'm also able to share this dedicated subnet across my other Hetzner machines, or maybe even a cluster, without any further routing configuration. Please make sure to change the VLAN tag (4000) so that it matches your requirements.
As gateway for your VM/LXC, use the gateway of the subnet, not the gateway of the physical host; that won't work.
If you change the IP of a VM/container or add a new one, it can take 1-5 minutes until the machine is properly routed and available. I guess this has something to do with the MAC address and ARP cache of Hetzner's vSwitch feature...
Hope this helps somebody.
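As an illustration, a static guest configuration on such a public subnet could look like this (a sketch reusing the example /29 from the Hetzner documentation quoted earlier; ens18 is an assumed interface name):

Code:
# inside the VM: /etc/network/interfaces
auto ens18
iface ens18 inet static
  address 213.239.252.50
  netmask 255.255.255.248
  gateway 213.239.252.49  # gateway of the vSwitch subnet, not the host's gateway
  mtu 1400                # match the 1400-byte MTU of the VLAN/vSwitch path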
 
I plan on doing something similar to Iain Stott, but I want to prevent the single point of failure that you also mentioned, @DerDanilo.
I just did a quick draft of how this problem could potentially be solved for my specific setup.
(3 physical servers running a Proxmox cluster, each having 2 NICs, so that the servers can communicate on a private network with lower latency.)

[Attached diagram: hetzner.png]

However, I'm not entirely sure if pfSense HA would work that way on Hetzner, as it depends on CARP virtual IP addresses, and as far as I know that wouldn't be possible via Hetzner vSwitches (?). The only other option would be the failover IPs/subnets, but I feel like they aren't as flexible as the setup via vSwitches. Did you by any chance ever do a setup like that on the Hetzner infrastructure?
 
Just to make things clear here: in general it's not a good idea to have the public bandwidth, Corosync, the private VM network, etc. all running on a single physical NIC with VLANs, as this can lead to time-outs, fencing under high load, or other odd behaviour. If you have around 30-40€ more to spend, you can set up your very own dedicated 1 Gbit/s physical switch with Hetzner. This switch lives completely outside of the vSwitch scope Hetzner offers; see pricing here:

https://wiki.hetzner.de/index.php/Root_Server_Hardware#Sonstiges

You will need:
3x NIC.
3x LAN connection.
1x switch (5-8 ports; it's the same price here for some reason).
(10 Gbit/s is also offered; please keep in mind that only one additional NIC or PCIe card is possible per server.)

Besides, from what I see of your setup, it seems that you want to have two Proxmox worker servers and one management node, is that right?
Your LAN setup looks somewhat like you are trying to accomplish that, as one node does not seem to be part of the HA setup.

Maybe the following works better here:

Set up a dedicated NIC/switch only for Proxmox, Corosync and migration management. This is where the HA stuff happens.
Set up the public network through a vSwitch with an additional IPv4 subnet just for your VMs, as I already described above.
Set up an Open vSwitch with an IPsec encryption shared secret on all three nodes to enable proper communication between your VMs (see the sketch at the end of this post). This OVS bridge can also be VLAN-aware, which again gives you much more flexibility for splitting your virtual infrastructure into network segments that are independent of each other. It's up to you whether you want to run this OVS bridge on the dedicated NIC or on the extra vSwitch network provided by Hetzner, which has only 1 TiB of included traffic per month.

I personally wouldn't use the Hetzner vSwitch for Proxmox Corosync traffic, as I'm not sure how "flawless" multicast works there, which is needed for a Proxmox HA setup to work properly, since Corosync publishes all of its communication via multicast. In the past I also did a two-node "HA" setup without any proper quorum on a multicast network, which again was a pain in the a** but it worked anyway...
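As a rough sketch of that Open vSwitch idea (assuming the private network 10.10.10.0/24 from the earlier example; the bridge name, port name and pre-shared key are placeholders, the psk option needs the openvswitch-ipsec service, and with three nodes you would add one tunnel per peer plus loop prevention such as RSTP):

Code:
# on node 1 (10.10.10.1), tunnel towards node 2 (10.10.10.2); repeat per peer and node
apt-get install openvswitch-switch openvswitch-ipsec
ovs-vsctl add-br vmbr1
ovs-vsctl add-port vmbr1 gre_node2 -- set interface gre_node2 \
    type=gre options:remote_ip=10.10.10.2 options:psk=ReplaceWithSharedSecret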
 
Sorry for the late reply,

I guess my diagram wasn't that clear, sorry.

1. All three of my nodes are part of the Proxmox cluster, but only two of them would run a pfSense firewall. The pfSense on node1 would be the "master" firewall, while the pfSense on node2 would be the backup firewall that takes over if node1 is down. I chose to use only two because all of the examples I found online were using just two pfSense instances for HA (I have never done an HA setup myself before), so I thought it would be better to stick to a solution where I can look up some examples.

2. The switch drawn above the servers ("private switch") is an actual physical switch that is directly connected to the 3 nodes (on an additional NIC). As you mentioned, Corosync will communicate over this private network. However, I planned on doing the private VM network via the Hetzner vSwitches, because according to the Proxmox wiki it is recommended to have a dedicated network for the Corosync communication, and I didn't want to bloat that up with my internal VM networks.

3. You said, "It's up to you whether you want to run this OVS bridge on the dedicated NIC or on the extra vSwitch network provided by Hetzner." I've already tested whether it's possible to work with VLANs on the private switch, but without any success. I guess the switch Hetzner provides isn't able to handle VLAN-tagged packets? Have you successfully connected a VLAN-aware bridge to the private switch before?

4. If I may ask, what firewall did you use for your HA setup? And how did you handle the failover? Are you aware of any caveats that would be good to know for a first-timer?

Thanks a lot for your help, btw!

Disclaimer: As I clearly lack experience, I think it's important to mention that I'm not doing this setup for a client or anyone else. I started this project to learn more about virtualization and high availability in my free time. I'm a practical learner, and you've got to start somewhere. :)
 
