[SOLVED] Proxmox VE 6.2 - routed setup - bridge won't do its job

Frankenstein

New Member
Sep 22, 2020
Hey guys,

we are currently setting up a dedicated server at Hetzner, using the installimage script from the rescue system to install Debian Buster with Proxmox VE. The installation itself was no problem and the panel is reachable. We want to set up routed mode with 2 additional IP addresses, from the same subnet but ordered as single IPs, and later activate and configure IPv6.

After a short SSH setup with a public key, we configured /etc/network/interfaces on the Proxmox host as follows:

Bash:
auto eno1
iface eno1 inet static
        address  <IP-Hypervisor>
        netmask  255.255.255.192
        gateway  <IP-Gateway>
        pointopoint <IP-Gateway>

auto vmbr0
iface vmbr0 inet static
        address  <IP-Hypervisor>
        netmask  255.255.255.192
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        up ip route add <First-Additional-IP>/26 dev vmbr0
        up ip route add <Second-Additional-IP>/26 dev vmbr0
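For what it's worth, Hetzner's routed-setup examples usually pin single additional IPs as /32 host routes instead of routing the whole /26 onto the bridge — a hedged sketch using the placeholders from above:

```
        up ip route add <First-Additional-IP>/32 dev vmbr0
        up ip route add <Second-Additional-IP>/32 dev vmbr0
```

With /32 routes, only the two additional addresses are steered to vmbr0, while the rest of the /26 stays reachable via eno1 and the gateway.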

The guest system has the following /etc/network/interfaces:
Bash:
auto eth0
iface eth0 inet static
        address <First-Additional-IP>
        netmask 255.255.255.192
        pointopoint <IP-Hypervisor>
        gateway <IP-Hypervisor>
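For comparison, the guest config in Hetzner's routed examples typically uses a /32 netmask, so that everything — including the hypervisor IP acting as gateway — is reached over the point-to-point link rather than a supposed on-link /26. A hedged sketch with the same placeholders:

```
auto eth0
iface eth0 inet static
        address <First-Additional-IP>
        netmask 255.255.255.255
        pointopoint <IP-Hypervisor>
        gateway <IP-Hypervisor>
```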

IP forwarding is enabled by default and additionally via /etc/sysctl.d/99-hetzner.conf. IPv6 is completely disabled for now, until IPv4 is running. Furthermore, we are not using the Proxmox firewall, not using the Hetzner firewall, and not using a vSwitch yet.
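To rule out a silently ignored sysctl, the effective values can be read back directly from /proc (read-only, so safe to run; the per-interface paths follow the same pattern, e.g. /proc/sys/net/ipv4/conf/eno1/proxy_arp):

```shell
# Read the current forwarding / proxy_arp state; each file contains 0 or 1
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv4/conf/all/proxy_arp
```

Both must read 1 for a routed setup; for routed mode ip_forward is mandatory and proxy_arp on the bridge is usually needed so the host answers ARP for the additional IPs.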


Troubleshooting:

  • Taking a tcpdump, we can see that the guest system sends packets for a ping over vmbr0, but they never reach eno1 on the host.
  • Taking a tcpdump, we can see that the host system receives packets for a ping on eno1, but they never reach vmbr0.
  • Tried configuring the additional IP addresses as /32 CIDR and the server's IP address as the default /26.
  • Tried configuring the additional IP addresses as /32 CIDR and the server's IP address also as /32 CIDR.
  • Set net.ipv4.conf.eno1.proxy_arp, net.ipv4.conf.vmbr0.proxy_arp, net.ipv4.conf.fwln200i0.proxy_arp and also net.ipv4.conf.all.proxy_arp to 1.
  • Tried to set up routes from eno1 to vmbr0, from 0.0.0.0 to <First-Additional-IP>, and the other way around from the VM's eth0 to vmbr0.
  • Tried to define <IP-Hypervisor> as the gateway for vmbr0.


I would be interested in:
  • A solution to the problem, including an explanation for next time ;)
  • Best practices for the desired end result outlined below
  • If you're keen, a graphical overview of the best practice :D (just a joke)
I would not be interested in:
  • Bridged mode (only if you can give me a really, really, really good reason for it, and more than "it's easy to configure")

But I'll take anything you can give me to get the damn bridge working, after 4 days of banging my head on my desk.

Desired result for now:
  • Working Proxmox VE routed setup with 2 additional IP addresses


Desired end result:
  • Working Proxmox VE
  • behind an OPNsense gateway running as a VM on the hypervisor
  • with an additional VPN VM for external access to the Proxmox interface from our workplaces
  • and a deactivated public interface on the hypervisor
  • Later a mail server behind a Proxmox Mail Gateway, plus an LDAP server
  • where the mail server and LDAP server should not be public (reachable only via OPNsense and/or the mail gateway)
  • though I don't know if this is best practice
  • Later we would build a master-master replica out of it, and later still a cluster

Best regards



// Solution found

We had ordered additional MAC addresses for the IP addresses - after removing them, the issue was solved and the machine was able to connect to the world :) (As far as I understand it, a separate MAC makes Hetzner deliver that IP's traffic to that MAC on layer 2, which suits a bridged setup; in routed mode the host's own MAC has to answer for the additional IPs.)