Using different subnets on the same physical interface - how?

gkovacs

My current cluster nodes are all configured like this:
Code:
[firewall VM]  +----- [vmbr1] ----- [eth1] ----- Internet
               +----- [vmbr0] ----- [eth0] ----- LAN

[all other VM] +----- [vmbr0] ----- [eth0] ----- LAN

Now I want to create another LAN, with a different IP subnet to connect the firewall VMs for dedicated (intra-firewall) traffic, but without using another physical interface.

- Is it possible to solve this by creating a new network card for the firewall VMs and connecting them via VLAN tagging?

- Do I need to create a separate (VLAN) bridge as well?
- Do I need to set up the VLAN on the physical switch as well?

Some other questions (if I decide to create a dedicated bridge for this subnet):

- Can bridges be daisy-chained? (Port for vmbr2 is vmbr0)
- Can bridges be connected to the same NIC? (Port for both vmbr2 and vmbr0 is eth0)
 
Hello gkovacs

- Is it possible to solve this by creating a new network card for the firewall VMs and connecting them via VLAN tagging?
Yes - suggestion below
- Do I need to create a separate (VLAN) bridge as well?
No - but I recommend changing to OVS (if you don't use it yet)
- Do I need to set up the VLAN on the physical switch as well?
No

- Can bridges be daisy-chained? (Port for vmbr2 is vmbr0)
Yes - but usually not necessary
- Can bridges be connected to the same NIC? (Port for both vmbr2 and vmbr0 is eth0)
No - even though it is possible to assign a NIC to an OVS bridge and a Linux bridge at the same time, that is a bug and dangerous!


What I understood:

A second LAN should be created, connecting some VMs on different nodes of the cluster.

The existing LAN physically connects the cluster nodes via eth0.

I would recommend using Open vSwitch (OVS); it is much more flexible than Linux bridges. If you have only VMs (and no containers), everything can be done via the GUI (for containers, a few simple CLI commands may be necessary).

Define the new LAN as a VLAN. On the host's physical interface (eth0) it is tagged, but the connection to the VMs can be tagged or untagged:

tagged:
-------

- no additional ports (on the host's virtual switch) or virtual NICs (in the VMs) are needed
- the VMs must be able to select the VLAN themselves (i.e. handle the tags)

untagged:
---------

- on the virtual switch, define a port for the respective VLAN
- the VM gets an additional virtual NIC, which it sees as a physical one; the VM does not need to know anything about VLANs
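For the untagged case, a node's /etc/network/interfaces could look roughly like this - a sketch only, where vmbr0 is the OVS bridge and VLAN tag 20 plus the addresses are made-up examples for the new firewall LAN:

Code:
auto eth0
iface eth0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports eth0 vlan20

# internal port giving the host itself an address on the new VLAN
auto vlan20
iface vlan20 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=20
        address 10.10.20.1
        netmask 255.255.255.0

The VM's additional virtual NIC then just gets tag 20 set on its bridge port (in the GUI), so the guest sees an ordinary untagged NIC.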


kind regards

Mr.Holmes
 
Thanks for your help Mr.Holmes! Indeed my goal is to create another network (with a different IP subnet) between PVE cluster nodes that only some VMs connect to, and this network should use the same physical NIC (eth0) as the default one.

If I understood your post correctly, I need to change vmbr0 (the current LAN bridge on all cluster nodes) from a Linux bridge to OpenVSwitch. This might present a problem, as I wanted to accomplish the above without disturbing the current network operation. Do you think it's possible somehow (plugging an OVS bridge into the current Linux bridge), or do I need OVS bridges on all cluster nodes directly connected to eth0 for the VLAN to work?
 
I need to change vmbr0 (the current LAN bridge on all cluster nodes) from a Linux bridge to OpenVSwitch. This might present a problem, as I wanted to accomplish the above without disturbing the current network operation.

A change to OVS without rebooting either host or VMs is possible. In that case you have to:
- prepare /etc/network/interfaces.new (possible with GUI)
- release linux-bridge from current VMs (brctl command)
- stop network with /etc/init.d/networking stop
- copy /etc/network/interfaces.new to /etc/network/interfaces
- start network with /etc/init.d/networking start
- connect ovs-bridge to current VMs (ovs-vsctl command)
Of course you can do the above via a shell script - the change will take only a few seconds (it is of course recommended to check it first in a test environment).
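The sequence above could be sketched as a small script like this (the commands are the ones named above; the tap interface name tap100i0 is a hypothetical example - adapt it to your actual VM interfaces, and test in a lab first):

Code:
#!/bin/sh
# switch vmbr0 from a Linux bridge to OVS without rebooting (sketch only)
brctl delif vmbr0 tap100i0                # release the VM NIC from the Linux bridge
/etc/init.d/networking stop               # stop networking (short outage!)
cp /etc/network/interfaces.new /etc/network/interfaces
/etc/init.d/networking start              # bring up the new OVS configuration
ovs-vsctl add-port vmbr0 tap100i0         # re-attach the VM NIC to the OVS bridge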

BUT: the precondition is that openvswitch-switch (version <= 2.0.90) is installed on the host. I usually rebooted after the install to set up the OVS database and start the services properly - I don't know whether it works without a reboot too (it should, if you initialize the database and start all the necessary services manually - study the OVS documentation).



(plugging an OVS bridge into the current Linux bridge), or do I need OVS bridges on all cluster nodes directly connected to eth0 for the VLAN to work?

Not recommended! And not necessary.

do I need OVS bridges on all cluster nodes directly connected to eth0 for the VLAN to work?

As a consequence of the above: yes.

An additional remark: if you want to prevent VMs that use only the "basic LAN" from accessing the new VLAN as well, it is recommended to change the "basic LAN" into a VLAN too (otherwise a Linux expert can easily reach every VLAN from the physical LAN). The change from LAN to VLAN for the "basic LAN" can of course also be made in the script proposed above.
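Putting the "basic LAN" on its own tagged VLAN could then look roughly like this in /etc/network/interfaces (the tag 10 and the address are made-up examples; the internal port sits on the same OVS bridge as the firewall VLAN):

Code:
# "basic LAN" moved onto its own tagged VLAN (example tag 10)
auto vlan10
iface vlan10 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=10
        address 192.168.1.10
        netmask 255.255.255.0

That way eth0 carries only tagged traffic, and a VM attached untagged to one VLAN cannot see the other.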
 
