Private Ethernet networks in a Proxmox cluster

adoII

Jan 28, 2010
Hi,

I have a cluster of three Proxmox servers which hosts virtual Linux machines.

Three Linux VMs need to be interconnected via an additional virtual NIC on a private 10.x.x.x network.

As long as I host all three VMs on the same Proxmox node, this is no problem: I define a "blind bridge" without any physical NIC and give each of the three VMs a second NIC connected to that blind bridge.
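For reference, such a port-less bridge can be declared in /etc/network/interfaces along these lines (the bridge name vmbr2 is an assumption, pick whatever number is free on your node):

```
# "Blind" bridge with no physical ports - VMs plugged into it
# can talk to each other, but traffic never leaves the host
auto vmbr2
iface vmbr2 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0
```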

But now I want to move the three VMs to three different Proxmox nodes and still maintain a virtual Ethernet network between them. So I either need to create blind bridges on each Proxmox node and interconnect them somehow, or I need to interconnect the VMs by another method.

I once worked with vde2 (Virtual Distributed Ethernet) and tunneled Ethernet frames via SSH, but that was not very solid and I did not really like the setup.

Is there another way I could define arbitrary logical Ethernet networks between VMs on different Proxmox nodes?

Thanks
adoII
 
If you have a second interface on the physical machine, add it to the bridge and connect the interfaces with a switch. If your switch supports VLANs, create a tagged VLAN on the switch and add the VLAN interface to the bridge.
If you want to use migration, make sure that you have the same network configuration on all physical servers.
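The tagged-VLAN variant could look something like this in /etc/network/interfaces (the interface name eth1, VLAN ID 100, and bridge name vmbr1 are assumptions; the switch port must carry VLAN 100 tagged):

```
# Bridge backed by a tagged VLAN subinterface instead of a raw NIC -
# identical stanzas on every node give VMs a shared layer-2 segment
auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth1.100
        bridge_stp off
        bridge_fd 0
```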

Sven
 
Hmm, sounds good. I don't have a free NIC, but I have a bonding device for the storage network where I might be able to add VLANs.
Unfortunately, it seems it is not possible to add VLANs to bonding devices.
I tried this:
auto bond0.111
iface bond0.111 inet static
        address 10.65.1.12
        netmask 255.255.255.0
        slaves eth1 eth2
        bond_miimon 100
        bond_mode 802.3ad

but it fails with this message:
~# ifup bond0.111
Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
ERROR: trying to add VLAN #111 to IF -:bond0:- error: Operation not supported

Any ideas on how to add VLANs to bonding interfaces?
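One likely cause of this error is putting the bonding options (slaves, bond_mode, ...) on the VLAN subinterface itself: the VLAN has to be stacked on top of a bond that is already configured and up. Assuming the Debian vlan package is installed, the manual steps would be roughly:

```shell
# Load the 802.1Q VLAN kernel module if it is not already loaded
modprobe 8021q
# Create VLAN 111 on top of an existing, already-up bond0
# (vconfig is provided by the Debian "vlan" package)
vconfig add bond0 111
ip addr add 10.65.1.12/24 dev bond0.111
ip link set bond0.111 up
```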
 
I finally got it working. The /etc/network/interfaces file has to look something like this:
auto bond0
iface bond0 inet manual
        slaves eth4 eth5
        bond_miimon 100
        bond_mode 802.3ad

auto vmbr1
iface vmbr1 inet manual
        bridge_ports bond0.112
        bridge_stp off
        bridge_fd 0
        post-up ifconfig bond0.112 up
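With vmbr1 defined identically on each node, a KVM guest can be given its second NIC on the private network from the command line; a sketch using qm (the VM ID 101 and the virtio model are assumptions):

```shell
# Attach a second NIC, plugged into the VLAN-backed bridge vmbr1,
# to an existing KVM guest (VM ID 101 is an example)
qm set 101 -net1 virtio,bridge=vmbr1
```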
 
I never liked the idea of exposing the Proxmox web console to the internet, so we have always had a minimum of two NICs on every server: one for the Net/DMZ and one for the LAN. The LAN interface carries the console, so it should be a fairly safe way to do business. We are also in the process of hardening some of the more important LAMP stacks behind a firewall. (Always a pain to maintain the NAT rules.)

This has the small additional problem of not being able to use Proxmox's nice OpenVZ IP management (which I really like). All the VMs' interfaces have to be 'Bridged Ethernet Devices' - so that we can define which NIC to use - which makes management marginally more complex. (Just like normal servers or KVMs.)

It's always a compromise: more security means more hassle and more work. I would like to see the OpenVZ IP management console gain the ability to select the NIC to be used. This would be a major improvement and save us much management time.
 
