[SOLVED] Distributed network in a cluster

tyxel

New Member
Mar 1, 2018
Greetings,

I have a 3-node cluster set up and I would like to have a private network between several VMs on different hosts.
First thing I did was to create a Linux bridge (vmbr1) via the GUI on all nodes without assigning an IP address, and this worked well as long as the VMs were on the same host.
What I would like to achieve is some kind of distributed switch on each host that would be reachable from all other hosts.
I found some Open vSwitch how-tos, but they seem overly complicated and not easy to follow for what I am trying to achieve here.

Could somebody please point me in the right direction?
Otherwise I could just run all VMs which need to communicate together on a single host but I would like to avoid this.

Running Proxmox VE 5.4

Thanks!

My current simple setup (Proxmox nodes IPs: 10.97.48.1, 10.97.48.2, 10.97.48.3):

Code:
auto lo
iface lo inet loopback
iface ens2f0 inet manual
iface ens2f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves ens2f0 ens2f1
        bond-miimon 100
        bond-mode balance-alb
#Primary NW

auto vmbr0
iface vmbr0 inet static
        address  10.97.48.1
        netmask  255.255.0.0
        gateway  10.97.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
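
For completeness, the vmbr1 I created via the GUI ends up looking roughly like this on each node (no IP address and no physical port):

Code:
auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0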
 
Even though vmbr1 cannot be seen (yet) in the above configuration, I assume it's defined without any physical port.

There are different possibilities to achieve what you want. IMHO the traditional way is the following:

- add IP addresses to the vmbr1 bridges

- define a different subnet on each of the 3 nodes

- set up routing tables (on both hosts and VMs) for routing between the subnets

That solution has some disadvantages, such as less flexibility, additional routing tables, etc. On the other hand, such a solution works in any case.
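
A rough sketch of that routed approach for node 1 (the 10.99.x.0/24 subnets are placeholders I made up; nodes 2 and 3 get the mirrored configuration):

Code:
auto vmbr1
iface vmbr1 inet static
        address  10.99.1.1
        netmask  255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        # reach the other nodes' VM subnets over the vmbr0 network
        post-up ip route add 10.99.2.0/24 via 10.97.48.2
        post-up ip route add 10.99.3.0/24 via 10.97.48.3
        # forward between vmbr0 and vmbr1
        post-up sysctl -w net.ipv4.ip_forward=1

VMs on node 1 then use addresses from 10.99.1.0/24 with 10.99.1.1 as their gateway, and each node forwards between vmbr0 and vmbr1.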

If your network (switches) allows it, you can define a VLAN on vmbr0 and assign the VM NICs of the special subnet to vmbr0 with the respective VLAN tag (no vmbr1 needed then).
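
For the VLAN variant, vmbr0 would look something like this (a sketch based on your posted config; bridge-vids can be narrowed to just the VLANs you use):

Code:
auto vmbr0
iface vmbr0 inet static
        address  10.97.48.1
        netmask  255.255.0.0
        gateway  10.97.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        # make the bridge VLAN aware so VM NICs can carry tags
        bridge-vlan-aware yes
        bridge-vids 2-4094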

And finally, you can use OpenVPN between the nodes with tap devices and bridge them into vmbr1.
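
A minimal sketch of the OpenVPN variant for one node pair (the file name, key path and port are placeholders; a full mesh needs one such link per node pair, each with its own tap device):

Code:
# /etc/openvpn/vmbr1-link.conf on 10.97.48.1, peering with 10.97.48.2
# static-key point-to-point tunnel, tap device enslaved to vmbr1
dev tap0
proto udp
remote 10.97.48.2 1194
secret /etc/openvpn/vmbr1-link.key
script-security 2
up "/bin/sh -c 'ip link set tap0 master vmbr1 && ip link set tap0 up'"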
 
Using VLANs is actually what I tried to do first, so maybe I'm overcomplicating things here.
Basically, what I'm trying to achieve is to have some VMs separated from the rest of the environment, as these will be in a DMZ accessible from the internet.
What I did was set all vmbr0 interfaces on all Proxmox nodes as VLAN aware, then created 2 VMs with network interfaces attached to vmbr0, used IPs from a different subnet (10.98.0.0/16), and assigned a VLAN tag (let's say 100). But this did not work: the VMs could not communicate with each other if they were running on different nodes.
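
For reference, this is roughly how I attached the NICs from the CLI (vmid 101 as an example):

Code:
qm set 101 --net0 virtio,bridge=vmbr0,tag=100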

If your network (switches) allows it, you can define a VLAN on vmbr0 and assign the VM NICs of the special subnet to vmbr0 with the respective VLAN tag (no vmbr1 needed then).

The switch is a Catalyst 3750, so it definitely supports VLANs; dot1q and trunks are configured on the physical ports to which the vmbr0 interfaces are connected. Am I missing something else here?
 
The switch is a Catalyst 3750, so it definitely supports VLANs; dot1q and trunks are configured on the physical ports to which the vmbr0 interfaces are connected. Am I missing something else here?

Since I am not a Cisco expert I don't know for sure, but the default is probably NOT to pass VLAN-tagged traffic. The switches I use require you to specify for each port which VLAN tags are allowed; otherwise packets will be dropped without any further notice.
 
I finally managed to get the VLANs to work; it was an issue with the switch configuration. I then just used vmbr0 as the interface my VMs connect to, set VLAN tag 100, and connectivity works as expected even when the VMs are on different nodes.
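
For anyone finding this thread later, the missing piece on the 3750 was allowing the VLAN on the trunk ports, roughly like this (interface name and VLAN list are examples, not my exact config):

Code:
! Trunk port towards a Proxmox node
interface GigabitEthernet1/0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 1,100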

In any case thanks for helping out!
 
