Hey! I'm trying to set up Proxmox with a bridged setup on a Hetzner dedicated server (I followed the official guidelines: https://community.hetzner.com/tutorials/install-and-configure-proxmox_ve#22-bridged-setup).
The Hetzner firewall is configured to allow all traffic in both directions.
The Proxmox firewall is also enabled: it allows all outgoing connections on all levels (datacenter, node, and VM) and incoming SSH, ping, DNS, NTP, and port 8006 connections on all levels. Disabling the firewall makes no difference.
The VMs behind the bridged (MASQUERADE) NAT can reach the internet. However, the VM with an additional IP and its assigned MAC address cannot reach the internet, even though I can reach that VM from outside. Its NIC is configured as:
Code:
net0: virtio=00:11:22:33:44:55,bridge=vmbr0,firewall=1
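For reference, that NIC definition can be applied or checked from the host with `qm`; a minimal sketch, assuming a hypothetical VMID 100 and the placeholder MAC above:

```shell
# Assign the additional IP's MAC (from Hetzner Robot) to the VM's first NIC.
# VMID 100 and the MAC are placeholders - substitute your own values.
qm set 100 --net0 virtio=00:11:22:33:44:55,bridge=vmbr0,firewall=1

# Confirm what the VM is actually configured with:
qm config 100 | grep net0
```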
Network configuration on the HOST/PVE Node:
Code:
# /etc/network/interfaces on HOST
iface nic0 inet manual
auto vmbr0
iface vmbr0 inet static
address <MAIN-IP>/26 # assigned main ip
gateway <MAIN-GATEWAY> # from hetzner robot
broadcast xxx.xxx.xxx.xxx # from hetzner robot
pointopoint xxx.xxx.xxx.xxx # same as gateway from hetzner robot
bridge-hw nic0
bridge-ports nic0
bridge-stp off
bridge-fd 0
bridge-hello 2
bridge-maxage 12
bridge-disable-mac-learning 1
auto vmbr1
iface vmbr1 inet static
address 10.10.0.1/26
bridge-ports none
bridge-stp off
bridge-fd 0
post-up echo 1 > /proc/sys/net/ipv4/ip_forward # confirmed to be set to 1
post-up iptables -t nat -A POSTROUTING -s 10.10.0.0/26 -o vmbr0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s 10.10.0.0/26 -o vmbr0 -j MASQUERADE
post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
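One thing worth spelling out about the NAT scope: the MASQUERADE rule above only rewrites sources in 10.10.0.0/26, so traffic leaving the VM via its additional public IP is never NATed and must be routable on its own. A quick sketch (the public IP here is a made-up documentation address, not the real one):

```python
import ipaddress

# Source range matched by the host's POSTROUTING MASQUERADE rule.
nat_source = ipaddress.ip_network("10.10.0.0/26")

# A private VM's address is inside the range, so its traffic gets masqueraded.
print(ipaddress.ip_address("10.10.0.5") in nat_source)        # True

# A (hypothetical) additional public IP is outside the range: no NAT applies,
# so replies must come back to that IP directly via vmbr0.
print(ipaddress.ip_address("203.0.113.140") in nat_source)    # False
```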
The routes on the host:
Code:
root@HOST# ip r
default via <MAIN-GATEWAY> dev vmbr0 proto kernel onlink
10.10.0.0/26 dev vmbr1 proto kernel scope link src 10.10.0.1
<MAIN-SUBNET>/26 dev vmbr0 proto kernel scope link src <MAIN-IP>
root@HOST# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
root@HOST# sysctl net.ipv6.conf.all.forwarding
net.ipv6.conf.all.forwarding = 1
The guests are running CentOS Stream 10.
The MAC address of the guest is set to the MAC address specified in the web interface (robot).
Code:
[ansible@GUEST-VM-PUBLIC ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff
altname enp6s18
altname enx005056006988
inet <ADDITIONAL-IP>/26 brd <ADDITIONAL-IP-BROADCAST> scope global dynamic noprefixroute eth0
valid_lft 41120sec preferred_lft 41120sec
inet6 fe80::250:56ff:fe00:6988/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether bc:24:11:86:56:ca brd ff:ff:ff:ff:ff:ff
altname enp6s19
altname enxbc24118656ca
inet 10.10.0.5/26 brd 10.10.0.63 scope global noprefixroute eth1
valid_lft forever preferred_lft forever
inet6 fe80::be24:11ff:fe86:56ca/64 scope link noprefixroute
valid_lft forever preferred_lft forever
Code:
[ansible@GUEST-VM-PUBLIC-01 ~]$ ip r
default via <GATEWAY-OF-ADDITIONAL-IP> dev eth0 proto dhcp src <ADDITIONAL-IP> metric 100
default via 10.10.0.1 dev eth1 proto static metric 101
10.10.0.0/26 dev eth1 proto kernel scope link src 10.10.0.5 metric 101
<ADDITIONAL-IP-SUBNET>/26 dev eth0 proto kernel scope link src <ADDITIONAL-IP> metric 100
The <ADDITIONAL-IP-SUBNET> is a xxx.xxx.xxx.128 address, while the <GATEWAY-OF-ADDITIONAL-IP> is xxx.xxx.xxx.129.

DNS is also configured on the guests via cloud-init:
Code:
[ansible@GUEST-VM-PUBLIC-01 ~]$ cat /etc/resolv.conf
# Generated by NetworkManager
search 167.34.243.136.clients.your-server.de
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 185.12.64.1
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 185.12.64.2
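On the /26 math mentioned above (using a documentation-range placeholder, since the real addresses are redacted): a /26 whose network address ends in .128 spans .128 through .191, and its first usable host, .129, is the gateway, which matches the Robot values:

```python
import ipaddress

# Placeholder for <ADDITIONAL-IP-SUBNET>: a /26 whose network address ends in .128.
subnet = ipaddress.ip_network("203.0.113.128/26")

print(subnet.network_address)    # 203.0.113.128
print(subnet.broadcast_address)  # 203.0.113.191

hosts = list(subnet.hosts())     # usable hosts: .129 through .190
print(hosts[0])                  # 203.0.113.129 - the gateway from Robot
```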
Removing the second default route via the internal subnet (default via 10.10.0.1) doesn't change anything. The first route is set up via cloud-init and DHCP, using the same parameters as shown in the web interface (Robot); it makes no difference whether I configure it statically or via DHCP.
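That result is consistent with how the kernel picks between the two default routes in the `ip r` output above: the prefix lengths are equal, so the lower metric wins, and outbound traffic already prefers eth0. A minimal model of that selection (values mirror the route table):

```python
# Two default routes with equal prefix length (/0): the kernel prefers
# the one with the lower metric, so eth0 carries outbound traffic
# whether or not the metric-101 route exists.
routes = [
    {"via": "<GATEWAY-OF-ADDITIONAL-IP>", "dev": "eth0", "metric": 100},
    {"via": "10.10.0.1",                  "dev": "eth1", "metric": 101},
]

best = min(routes, key=lambda r: r["metric"])
print(best["dev"])  # eth0
```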
I can ping the other (private) VMs via their private IPs (e.g. 10.10.0.2, or the gateway 10.10.0.1), and I can ping the host via its public IP address. However, I can't ping the gateway from the guest (the guest's gateway is the same as the host's gateway). From the host, I can reach the gateway, and the host has a working internet connection.
I can also ping the public guest VM from my local machine via its public IP, and I can SSH in.
Funnily enough, the private VMs (without a public IP) can reach the internet.
I could now make the default route on the public guest VM go via the NAT, but I don't think that's the right solution, or is that actually expected?
Thank you!