Allowing LXC with a private IP only to access the Internet on a cluster with traditional bridges and VLANs on Hetzner

jsabater
Oct 25, 2021
Hello everyone!

I have a Proxmox 7.1 cluster with single-NIC nodes running on Hetzner with the following network interfaces:
  1. Ethernet device eno1: Public IP address assigned to the server when ordered.
  2. Bridge vmbr4001: Holds the public subnet assigned to the vSwitch with id 4001 (so that guests with public IPs can be migrated among nodes).
  3. Bridge vmbr4002 (192.168.0.0/24): Used for communication among guests through vSwitch with id 4002 (e.g. Nginx acting as proxy for a Minio server).
  4. VLAN eno1.4003 (192.168.1.0/24): Used for communication among nodes of the cluster through vSwitch with id 4003 (Corosync, SSH, etc).
My /etc/hosts is as follows:

Code:
192.168.1.11 proxmox1.example.com proxmox1
192.168.1.12 proxmox2.example.com proxmox2

Only hosts (i.e. nodes) have visibility over the 192.168.1.0/24 network since it's used only for the cluster nodes to talk to each other.
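Just to be thorough, this is roughly how I check from proxmox1 that the nodes actually see each other over that VLAN (plain commands, nothing exotic):

Code:
ping -c 3 192.168.1.12    # proxmox2 over the 4003 VLAN
pvecm status              # the cluster should list all nodes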

All guests are LXC running Debian Bullseye. Two types:

1. With private IP only (192.168.0.<id>/24).
2. With private IP (net0) and public IP (net1).

net0 is always the private IP and uses vmbr4002.
net1 is always the public IP (when needed) and uses vmbr4001.
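For illustration, a container of the second type ends up with net lines roughly like these in its /etc/pve/lxc/<id>.conf (the addresses and MACs below are made up; a.b.c.* stands for the public subnet assigned to the vSwitch):

Code:
net0: name=eth0,bridge=vmbr4002,hwaddr=AA:BB:CC:00:00:01,ip=192.168.0.20/24,type=veth
net1: name=eth1,bridge=vmbr4001,hwaddr=AA:BB:CC:00:00:02,ip=a.b.c.10/29,gw=a.b.c.9,type=veth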

This is my /etc/network/interfaces:

Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        hwaddress aa:bb:cc:dd:ee:ff
        address x.y.z.182/24
        gateway x.y.z.129
        pointopoint x.y.z.129
# Proxmox host

iface eno1.4001 inet manual
        mtu 1400

auto vmbr4001
iface vmbr4001 inet manual
        bridge-ports eno1.4001
        bridge-stp off
        bridge-fd 0
        mtu 1400
# Proxmox guests public network

iface eno1.4002 inet manual
        mtu 1400

auto vmbr4002
iface vmbr4002 inet static
        bridge-ports eno1.4002
        bridge-stp off
        bridge-fd 0
        mtu 1400
# Proxmox guests private network 192.168.0.0/24

auto eno1.4003
iface eno1.4003 inet static
        address 192.168.1.11/24
        vlan-raw-device eno1
        mtu 1400
# Proxmox hosts private network 192.168.1.0/24

This configuration works fine except for the fact that guests with only a private IP address cannot access the Internet (e.g. apt-get update fails). I am now in the process of adding a third node and I am working with this (temporary) network configuration:

Code:
auto lo
iface lo inet loopback

auto enp0s31f6
iface enp0s31f6 inet static
        hwaddress aa:bb:cc:dd:ee:ff
        address x.y.z.45/27
        gateway x.y.z.33
        pointopoint x.y.z.33
# Proxmox host

iface enp0s31f6.4001 inet manual
        mtu 1400

auto vmbr4001
iface vmbr4001 inet manual
        bridge-ports enp0s31f6.4001
        bridge-stp off
        bridge-fd 0
        mtu 1400
        bridge-disable-mac-learning 1
# Proxmox guests public network

iface enp0s31f6.4002 inet manual
        mtu 1400

auto vmbr4002
iface vmbr4002 inet static
        address 192.168.0.13/24
        bridge-ports enp0s31f6.4002
        bridge-stp off
        bridge-fd 0
        mtu 1400
        # Enable routing
        post-up echo "1" > /proc/sys/net/ipv4/ip_forward
        # Add rule to rewrite (masquerade) outgoing packets from vmbr4002
        # to appear as coming from the IP address of <out-interface>
        post-up   iptables --table nat --append POSTROUTING --source '192.168.0.0/24' --out-interface enp0s31f6 --jump MASQUERADE
        post-down iptables --table nat --delete POSTROUTING --source '192.168.0.0/24' --out-interface enp0s31f6 --jump MASQUERADE
        # Following rules are needed as we are using PVE Firewall
        # https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_network_configuration
        post-up   iptables --table raw --insert PREROUTING --in-interface fwbr+ --jump CT --zone 1
        post-down iptables --table raw --delete PREROUTING --in-interface fwbr+ --jump CT --zone 1
# Proxmox guests private network 192.168.0.0/24

auto enp0s31f6.4003
iface enp0s31f6.4003 inet static
        address 192.168.1.13/24
        vlan-raw-device enp0s31f6
        mtu 1400
# Proxmox hosts private network 192.168.1.0/24
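Before going on, this is more or less how I check on the host (and from a private-only container) that the forwarding and masquerading actually took effect once the configuration is applied:

Code:
# On the host
sysctl net.ipv4.ip_forward            # should print net.ipv4.ip_forward = 1
iptables -t nat -S POSTROUTING        # should list the MASQUERADE rule for 192.168.0.0/24
iptables -t raw -S PREROUTING         # should list the CT --zone 1 rule for fwbr+

# From a container with only a private IP and 192.168.0.13 as gateway
ip route                              # default via 192.168.0.13 dev eth0
apt-get update                        # now reaches the Debian mirrors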

This allows me:
  1. To give any LXC in the cluster with just a private IP (e.g. a PostgreSQL server at 192.168.0.100) 192.168.0.13 as its gateway so it can run apt-get update. And when migrated, thanks to the vSwitch, it keeps working on the new node with no changes required (the traffic does hop through the vSwitch, but that only happens when upgrading packages). See the sketch after this list.
  2. To give LXC in the cluster with both a private and a public IP address (e.g. an Nginx server) separate bridges (vmbr4002 and vmbr4001, respectively), which helps with clarity in my opinion.
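The sketch mentioned in point 1: a private-only container just carries something like this in its /etc/pve/lxc/<id>.conf (the MAC is made up), with the gateway pointing at the host's address on vmbr4002:

Code:
net0: name=eth0,bridge=vmbr4002,hwaddr=AA:BB:CC:00:00:03,ip=192.168.0.100/24,gw=192.168.0.13,type=veth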
The Proxmox LXC UI still doesn't support setting the MTU of a network interface. I solve this by editing /etc/pve/lxc/<id>.conf and adding mtu=1400 at the end of the net0 and net1 configuration lines.
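So the net0 line of the sketch above simply becomes:

Code:
net0: name=eth0,bridge=vmbr4002,hwaddr=AA:BB:CC:00:00:03,ip=192.168.0.100/24,gw=192.168.0.13,type=veth,mtu=1400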

The Proxmox firewall UI still doesn't support setting NAT/MASQUERADE rules at the cluster or node level. I solve this via post-up and post-down rules in the network configuration of the hosts, as shown above.

I would like to know whether "this is the way" or whether there is another technique I don't know about that could improve the setup. That is, keep it as clear, simple, explicit and straightforward as possible, while still allowing migration of LXC from node to node with no configuration changes.
 
So a user on Reddit suggested that I may not even need to configure a gateway in the LXC that require just a private IP address if I were to use the following:
  1. apt-cacher-ng, an APT proxy. I will set it up soon, as well as an internal DNS using pDNS (see the sketch after this list).
  2. pbs-client instead of scp to copy the backups. I am ready to use that, since my PBS server is already on the hosts' private network, 192.168.1.0/24.
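For the apt-cacher-ng part, as far as I understand each private-only container would then only need an APT proxy snippet pointing at wherever the cacher ends up living (the address below is made up; 3142 is apt-cacher-ng's default port):

Code:
# /etc/apt/apt.conf.d/01proxy inside each private-only container
Acquire::http::Proxy "http://192.168.0.5:3142";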
The only other thing I would need is a way for developers to connect directly to an LXC, either through an OpenSSH jump via one of the hosts or via a proxy LXC with a public IP address. I will investigate this.
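As a first attempt I will probably try plain OpenSSH jumping through a host, something along these lines on the developers' machines (hostnames and users below are placeholders):

Code:
# ~/.ssh/config on a developer's machine
Host pg1
    HostName 192.168.0.100
    User developer
    ProxyJump admin@proxmox1.example.com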