Routed mode with Bonding and VLANs - support needed

smaxxx

New Member
Aug 18, 2023
Hi,

I am trying to configure my Proxmox VE server to use bonding and VLANs, and to make the VMs/CTs run in "routed" mode.

I have successfully configured bonding and VLANs on the Proxmox VE host. However, I am currently stuck configuring the vmbr0 interface for routed mode.

I have used the following resources:
https://pve.proxmox.com/wiki/Network_Configuration
https://help.ovhcloud.com/csm/en-de...?id=kb_article_view&sysparm_article=KB0043913
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_network_vlan

First of all, my relevant interfaces and IPs:
eno1 and eno2 are physical interfaces that are bonded together; eno2 is the primary interface. The bond itself runs in active-backup mode.

The bond has VLAN 10 configured, carrying the public IP of the host itself. For this example, let's say the host's public IP and subnet are "123.123.123.123/22" and the gateway is "123.123.123.1".

This part works great: the host has connectivity, and both the VLAN and the failover work.
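(For anyone reproducing this part: the host-side state can be verified with standard tools; only the interface names from this thread are assumed.)

Bash:
# Bonding driver state: mode, primary, currently active slave, per-slave link status
cat /proc/net/bonding/bond0

# VLAN details and the address on the VLAN 10 sub-interface
ip -d link show bond0.10
ip addr show bond0.10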

Now comes the part where I am stuck.

I am trying to configure my VMs/CTs so they run in routed mode and do not use bridging at all.

The VMs/CTs run in the same subnet as the host, but could use other subnets as well (to reduce complexity, let's stick to the network above).

I tried to configure it like in
https://help.ovhcloud.com/csm/en-de...?id=kb_article_view&sysparm_article=KB0043913.

First issue:

If I configure it via vmbr0, the CTs cannot reach the gateway 192.168.0.1. Then I tried to configure vmbr0v10, but I cannot even choose this bridge in the CT configuration.

I do not want to configure the VLANs on the CT itself.

My /etc/network/interfaces:

Bash:
auto lo
iface lo inet loopback
        # enable routing and answer ARP on the uplink for routed guest IPs
        up echo "1" > /proc/sys/net/ipv4/ip_forward
        up echo "1" > /proc/sys/net/ipv4/conf/bond0/proxy_arp
        up echo "1" > /proc/sys/net/ipv4/conf/bond0.10/proxy_arp

auto eno1
iface eno1 inet manual
        bond-master bond0

auto eno2
iface eno2 inet manual
        bond-master bond0

iface ens2f0 inet manual

iface ens2f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup
        bond-primary eno2
        bond-downdelay 400
        bond-updelay 10000

auto bond0.10
iface bond0.10 inet static
        address 123.123.123.123/22
        gateway 123.123.123.1

# My first try: I can select the bridge in the VM, but the CTs have no connection to the gateway. The host can reach the CTs over vmbr0, though.
#auto vmbr0
#iface vmbr0 inet static
#       address 192.168.0.1/24
#       bridge-ports none
#       bridge-stp off
#       bridge-fd 0
#       up ip route add 123.123.120.0/22 dev vmbr0

# My second try: I cannot even select the bridge in the VM.
auto vmbr0v10
iface vmbr0v10 inet static
        address 192.168.0.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        up ip route add 123.123.120.0/22 dev vmbr0v10
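(A side note on the route lines above: `up ip route add 123.123.120.0/22 dev vmbr0v10` targets a /22 that already exists as an on-link route via bond0.10, so it would likely fail with "File exists". OVH-style routed setups typically add one /32 host route per guest IP instead; a minimal sketch of that variant, reusing the example CT IP 123.123.120.25 from below:)

Bash:
# Sketch: per-guest /32 routes instead of routing the whole /22 over the bridge,
# so the on-link /22 on bond0.10 stays untouched (IPs are this thread's examples)
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        up ip route add 123.123.120.25/32 dev vmbr0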

With the first configuration (vmbr0), I was able to reach the CT from the host using `ping -I vmbr0 CT_IP`, but the CT was not able to ping 192.168.0.1.
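(A quick way to check whether the CT's ARP requests for 192.168.0.1 get answered is to watch the bridge from the host while pinging from inside the CT; nothing beyond the interface name from this thread is assumed.)

Bash:
# Show ARP and ICMP traffic on the bridge, with Ethernet headers
tcpdump -eni vmbr0 arp or icmp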

The CT network configuration was this:
Code:
IP: public IP of the CT, e.g. 123.123.120.25/32
Gateway: 192.168.0.1
Bridge: vmbr0
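(The same settings expressed via the Proxmox CLI; the VMID 100 is hypothetical.)

Bash:
# Hypothetical CT 100; values mirror the GUI settings above
pct set 100 --net0 name=eth0,bridge=vmbr0,ip=123.123.120.25/32,gw=192.168.0.1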

What am I missing? Can anyone point me in the right direction? Is this setup too complex, and should I perhaps use Open vSwitch instead? Any advice? Thank you!
 
Are you actually leveraging VLANs in this environment, or are you just trying to create a management interface on a specific port? I'm also curious how this routes externally: how is your route discovered so traffic can be passed?
 
Hi,
we are actually leveraging VLANs in our network, yes.
bond0's member interfaces (eno1 and eno2) go to separate switches, and both switch ports are configured as trunk ports.

ens2f0 is meant for internal/management purposes, but I haven't configured it yet.
 
But are you telling your external router how to get to that IP address range? The 123.123.123.123/22 range is known by the router, but if you want those addresses to be reached via 192.168.0.1/24, that needs to be set up explicitly. Traffic won't automatically route that way just because of a configuration on a host node.
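(Schematically, that would be a static route on the upstream router pointing the guest prefix at the Proxmox host's public address. Linux route syntax is used purely for illustration, and `<guest_range>` is a placeholder.)

Code:
# On the upstream router: send the guest prefix to the Proxmox host's public IP.
# In this thread's example the guests sit inside the on-link /22, where the
# host's proxy_arp entries would answer for them instead of a routed next hop.
ip route add <guest_range> via 123.123.123.123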

Also, I should note that in:
Code:
auto bond0.10
iface bond0.10 inet static
        address 123.123.123.123/22
        gateway 123.123.123.1

You would likely want to use an address that isn't in the range that you are trying to push to your routed VMs.
 
