Hello all,
I am new here in the forum, and I am slowly reaching my limits with my Proxmox network setup.
I have the following setup with dedicated servers at Hetzner:
3x dedicated servers, each with an additional NIC on one switch for the Ceph cluster.
The servers are connected to the VLANs 4001, 4002, and 4003 via Hetzner vSwitches.
The /etc/network/interfaces on Debian 12 looks like this:
Code:
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

iface lo inet6 loopback

# NIC with public IP from Hetzner
auto enp5s0
iface enp5s0 inet static
    address 144.x.x.x/27
    gateway 144.x.x.x
    hwaddress c8:cc:f8:38:91:c5
    up route add -net 144.x.x.x netmask 255.255.255.224 gw 144.x.x.x dev enp5s0

### VLANs from the Hetzner vSwitch
auto enp5s0.4001
iface enp5s0.4001 inet manual
    mtu 1400
#proxmox

auto enp5s0.4002
iface enp5s0.4002 inet manual
    mtu 1400
#kubernetes

auto enp5s0.4003
iface enp5s0.4003 inet manual
    mtu 1400
#nat for internet on host machines

auto vmbr4001
iface vmbr4001 inet static
    address 10.53.8.12/24
    bridge-ports enp5s0.4001
    bridge-stp off
    bridge-fd 0
    mtu 1400
#proxmox

auto vmbr4002
iface vmbr4002 inet static
    address 10.45.82.12/24
    bridge-ports enp5s0.4002
    bridge-stp off
    bridge-fd 0
    mtu 1400
#kubernetes

auto vmbr4003
iface vmbr4003 inet static
    address 10.10.10.12/24
    bridge-ports enp5s0.4003
    bridge-stp off
    bridge-fd 0
    mtu 1400
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o enp5s0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o enp5s0 -j MASQUERADE
#nat

# separate network interface for the Ceph cluster
auto enp7s0
iface enp7s0 inet manual

auto vmbr4000
iface vmbr4000 inet static
    address 10.87.114.12/24
    bridge-ports enp7s0
    bridge-stp off
    bridge-fd 0
#ceph
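For debugging, the checks I run on a node after a reboot look roughly like this (standard procfs/iproute2/iptables commands; the interface and bridge names are the ones from the config above, the privileged commands are left commented out):

```shell
# Post-reboot sanity checks for the config above.
cat /proc/sys/net/ipv4/ip_forward        # expect 1, set by the post-up on vmbr4003
# ip -brief link show enp5s0.4003        # VLAN device should be UP with mtu 1400
# bridge link show                       # enp5s0.400x should be enslaved to vmbr400x
# iptables -t nat -S POSTROUTING        # should list the 10.10.10.0/24 MASQUERADE rule
```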
On the Debian VMs on the respective Proxmox nodes, /etc/network/interfaces looks like this:
Code:
source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug ens18
iface ens18 inet static
    address 10.45.82.12/24
    up ip route add 10.45.0.0/16 via 10.45.82.1
    mtu 1400

auto ens19
iface ens19 inet static
    address 10.10.10.111/24
    gateway 10.10.10.11
    mtu 1400
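Since the vSwitch VLANs run with MTU 1400, one thing I tried to rule out is a path-MTU problem: a ping with the don't-fragment bit set and the largest payload that fits should get through, while one byte more should fail. A small sketch of the arithmetic (the peer IP is just an example address on vmbr4001):

```shell
# Largest ICMP payload that fits into MTU 1400:
# 1400 bytes - 20-byte IPv4 header - 8-byte ICMP header = 1372 bytes
MTU=1400
PAYLOAD=$((MTU - 20 - 8))
echo "max unfragmented ICMP payload: $PAYLOAD"
# ping -M do -s "$PAYLOAD" 10.53.8.13        # example peer; should succeed
# ping -M do -s $((PAYLOAD + 1)) 10.53.8.13  # should fail if the path MTU is really 1400
```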
The strange thing is that the connection is stable once it is established, and all the VMs can ping each other.
But if I wait a while, or restart a Proxmox node together with its VMs, I can no longer reach all the targets from the hosts.
The NAT keeps working the whole time, and all the VMs still have internet access, despite there being only one public IP per node.
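When the targets become unreachable, I can at least inspect the neighbour state like this (standard iproute2; the arping probe is just an example and needs the arping package installed):

```shell
# Dump the IPv4 neighbour (ARP) table; FAILED/INCOMPLETE entries on the
# vmbr400x bridges would explain hosts that suddenly stop answering.
ip -4 neigh show
# arping -I vmbr4002 -c 3 10.45.82.1  # example probe; replies from two different MACs would hint at a duplicate IP
```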
I would be really grateful for any ideas as to what else it could be.
Greetings JohnBoyB.