Network Config Hetzner

pille99
Sep 14, 2022
hello all
i am pretty new to Proxmox but i already love it. i just have a bit of a time issue (my old server will be decommissioned and i need to migrate all ESX VMs before then).
i have 3 servers from Hetzner, each with 4 NICs (3 are only internally connected - to each other).

hetzner uses 1 NIC for public traffic (2 IPs, each with its own gateway: one IP is for management and the other for public traffic with its own MAC address).
how do i enter the config i need in Proxmox?
thx guys
 
2 questions left (this guide is pretty old and not up to date):

1. which config do i need? i guess a bridge config! the first VM will be OPNsense, which manages all incoming traffic and forwards it to the related VM, and will also manage my failover IPs.
2. where and how do i enter 2 gateways? the virtual nic0 is management (which has a completely separate config) and the virtual nic1 has its own config (its own gateway, MAC address authorized with Hetzner, etc.).
 
Yes, I think a bridge config will be a good choice here. There is a tutorial on how to set up pfSense with a bridged model here: https://getlabsdone.com/how-to-install-pfsense-on-proxmox-step-by-step/ . I think this is just what you need?

For the second question, you should just be able to configure the NICs separately - maybe I am misunderstanding the question?
 
maybe i am not clear enough.

Hetzner looks like this - 1 NIC, 2 IPs:

1. IP 1.2.3.4, subnet 255.255.255.224, gateway A.B.C.D, NO MAC, for management
2. IP 9.8..6, subnet 255.255.255.224, gateway Z.Y.X.W, Hetzner MAC (which i need to configure the interface with), this will be the public IP

in the hetzner docs:

```
auto vmbr0
iface vmbr0 inet static            # <- if i use that as "primary" address, like number 2
    address <main IP>
    hwaddress <aa:bb:cc:dd:ee>     # MAC address of the NIC, required since Proxmox 7.0
    netmask 255.255.255.255
    pointopoint <gateway IP>
    gateway <gateway IP>
    bridge_ports enp1s0
    bridge_stp off
    bridge_fd 1

# for a subnet                     # <- if i use that for the management IP, like number 1, the gateway is missing
auto vmbr1
iface vmbr1 inet static
    address <a usable subnet IP>
    netmask <netmask of the subnet>
    bridge_ports none
    bridge_stp off
    bridge_fd 0
```
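Two gateways on one host are usually handled with policy routing: each source IP gets its own routing table with its own default route, and a rule selects the table by source address. A hedged sketch of that idea in `/etc/network/interfaces` syntax (all addresses below are documentation placeholders, not real Hetzner values, and table number 2 is arbitrary):

```
auto vmbr0
iface vmbr0 inet static
    address 203.0.113.10/27                # management IP (placeholder)
    gateway 203.0.113.1                    # its gateway = system default route
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 1
    # second public IP with its own gateway, selected by source address
    up ip addr add 198.51.100.20/27 dev vmbr0
    up ip route add default via 198.51.100.1 dev vmbr0 table 2
    up ip rule add from 198.51.100.20 table 2
```

With this, traffic sourced from the second IP leaves via its own gateway, while everything else uses the default route of the management IP.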
 
i found a solution.
i am not sure it works, can you have a look at it? it looks good, to be honest:

https://askubuntu.com/questions/137...erface-in-bridge-configuration-with-netplan-o

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp195s0:
      dhcp4: false
      dhcp6: false

  bridges:
    kvmbr0:
      interfaces:
        - enp195s0
      addresses:
        - x.x.x.x/26        # <- netmask here
        - y.y.y.y/29        # <- netmask here
      routes:
        - to: 0.0.0.0/0
          via: gx.gx.gx.gx
          metric: 100
        - to: nx.nx.nx.nx/26    # <- route to main IP network
          via: gx.gx.gx.gx      # <- via main IP gateway
          metric: 100
          table: 1              # <- with routing table assignment
        - to: 0.0.0.0/0
          via: gy.gy.gy.gy
          metric: 200
        - to: ny.ny.ny.ny/29    # <- route to additional IP network
          via: gy.gy.gy.gy      # <- via additional IP gateway
          metric: 200
          table: 2              # <- with routing table assignment
      routing-policy:           # <- routing policies for the IP networks
        - from: nx.nx.nx.nx/26
          table: 1              # <- appropriate routing table
        - from: ny.ny.ny.ny/29
          table: 2              # <- appropriate routing table
      dhcp4: no
      dhcp6: no
      nameservers:
        addresses:
          - 185.12.64.2
          - 185.12.64.1
          - 2a01:4ff:ff00::add:1
          - 2a01:4ff:ff00::add:2
      parameters:
        stp: true
        forward-delay: 4
```
 
LGTM, and if it's working then I don't see a problem with it. Sorry for the misunderstanding - I thought you had 2 NICs...
 
are you familiar with networking in Proxmox?
maybe i should open a new post? i have some issues understanding this:

```
                           <--3--> 172 subnet
Public IP <--1--> opnsense <--2--> 192 subnet
                           <--4--> 10.x subnet
```

1 = vmbr1
2 to 4 are private ranges (the gateway is .254 of each subnet, which should be connected to OPNsense). from my understanding i need to create new bridges vmbr2-x, without any IPs, and connect these to OPNsense - correct?

how? what kind of network do i need to create?
another question - i have a 3 node cluster, do i need to create all networks on each node?
what is the best practice for configuring OPNsense? all incoming traffic on 1 IP, or split across all 3 IPs? (i have a block of failover IPs which should be used later)
thx again
 
I think the setup from my earlier post should be exactly what you are looking for: https://getlabsdone.com/how-to-install-pfsense-on-proxmox-step-by-step/

[attached diagram: WAN -> vmbr1 -> pfSense -> vmbr2 -> VMs]

vmbr1 connects pfSense with the outside world, and then pfSense and the other VMs are connected via vmbr2. So you only need to create one bridge for all 3 VMs and one bridge for connecting pfSense with the internet. I don't think you really need a different subnet for each VM. Wouldn't one subnet suffice that contains all 3 VMs? Or is this a requirement you have?
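For reference, an internal bridge with no host IP can be declared like this in `/etc/network/interfaces` (a minimal sketch; the name vmbr2 matches the setup described above, the example gateway address is an assumption):

```
auto vmbr2
iface vmbr2 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
# the VMs and the pfSense/OPNsense LAN interface attach to vmbr2;
# the firewall VM (e.g. 192.168.1.254 on its LAN NIC) acts as the
# gateway for that subnet - the host itself has no address on it
```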
 
i have a couple more, around 30. yes, i need segmentation. but that helps a lot.
and how about the "cluster" functionality? do the same networks need to be created on all 3 nodes? is there no sync option?

what do you think - how should the 3 public IPs be connected? from each node/server to OPNsense, and from there to all subnets? i have found only info about standalone servers; for a more complex environment it's very difficult to find anything. please let me know your thoughts about "how to use the OPNsense firewall" on a 3 node cluster (later 1 or 2 more nodes will be added).
 
So you have 3 available NICs on every node that are internally connected. It would make sense to create separate networks for storage, Corosync and pfSense. This way storage migrations do not affect the Corosync traffic, for instance.

For the VMs you then add a network interface that bridges the pfSense network, so the VMs can actually access the internet. All the VMs should be configured to use pfSense as the gateway on the respective NIC, so their traffic goes through pfSense. If you want segmentation for the VMs you can then use VLANs for easy management of the different segments.

It would make sense to have pfSense run on an external node that is separate from the 3 nodes in the cluster, then connect the 3 nodes via the dedicated network to pfSense. In the end you should have 3 networks, 2 of which only the nodes are connected to (storage and Corosync) and one where additionally all the VMs connect to (pfSense).

How you would want to use the IPs depends on your specific use case. What you probably want is to have the public facing IP of each node available for management, then use the public facing IP of the pfSense node to route all VM traffic.
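Segmentation via VLANs, as suggested above, can be done with a single VLAN-aware bridge instead of one bridge per subnet. A hedged sketch in `/etc/network/interfaces` syntax (the bridge name and VLAN range are assumptions; this stanza would be repeated on each node):

```
auto vmbr2
iface vmbr2 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
# each VM NIC gets its VLAN tag in the Proxmox GUI;
# the firewall VM gets one untagged trunk port on vmbr2 and
# a VLAN sub-interface (with the .254 gateway IP) per segment
```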
 
i have at the moment:
- 10gb for the ceph cluster network
- 1gb for ceph public
- 1gb for external access
do you have any suggestion what i can use the free NIC for? it's also 1gb
 
It would be wise to run Corosync on that NIC, so the traffic for syncing the cluster does not get interrupted by Ceph or external traffic.
 
you are right.
it currently uses the public IP network.
is it enough if i change it in the corosync.conf on all 4 nodes?

many thx for your input
 
Yes, it should be fine if you just enter the IP of the respective NIC you want to use into the Corosync configuration on the nodes. You can do this in the web UI. You might have to edit `/etc/hosts` as well, but I am not 100% sure.
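For illustration, the relevant part of the Corosync configuration looks roughly like this (hostname and address are placeholders). Note that on Proxmox the file should be edited via `/etc/pve/corosync.conf` so the change replicates to all nodes, and `config_version` must be incremented on every change:

```
nodelist {
  node {
    name: hvirt01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.12.10   # <- address on the dedicated Corosync NIC
  }
  # ... one node {} block per cluster node ...
}

totem {
  cluster_name: mycluster
  config_version: 5           # <- must be bumped on every edit
  # ...
}
```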
 
it fu**ed up my whole cluster.
the monitors never came up again. i spent my whole evening until 22:00 trying to fix it. i was fed up and just rebooted everything. 2 reboots and everything was working again.
 
i noticed a huge performance impact.
one thought is name resolution.

the hosts file contains:

public_ip hvirt01.domain.com hvirt01

but the new configuration is on 10.10.12.10-14.
the name resolves to "public_ip" and not the 10.x network,
but the cluster needs to resolve the hostname (as i have read).

how can i solve it?

btw: i have opened another question about performance in general (ceph)
 
Oh no, bad to hear. Did you replace the IP or add an additional one? Replacing it might be problematic - I should have been clearer, sorry.

It should suffice to replace the public IP in the hosts file with your 10.x.x.x IP; then it should resolve correctly (if I understand correctly).
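Concretely, that means changing the line in `/etc/hosts` (hostname and 10.x address taken from the post above; the public IP shown is a documentation placeholder for whatever is currently in the file):

```
# before:
# 203.0.113.10 hvirt01.domain.com hvirt01

# after:
10.10.12.10 hvirt01.domain.com hvirt01
```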
 
it's not completely correct - name resolution was the issue, even though it's based on IPs.
i replaced the external IP with the internal one, and i spent hours getting the cluster up and running again. what finally fixed it was 2 reboots. strange.

anyway, it is working again. i just have 2 more issues to solve, then i am pretty happy with the proxmox cluster.
 
Yes, when messing with the network configuration certain changes require a reboot. You can also try applying changes without a reboot (e.g. `systemctl restart networking`, or `ifreload -a` with ifupdown2 on Proxmox), although it might not always work 100%.