VLAN routing

Munir Nassar

New Member
May 23, 2019
I am standing up a small 3-node cluster to potentially replace a much larger VMware installation. I am trying to recreate the same network configuration as in vSphere, so I have installed Open vSwitch for its advanced capabilities. Each host has two NICs, connected to independent switches. The switches are identically configured except for the iSCSI VLANs, which are unique to each switch. Management (untagged) and VM network traffic (variously tagged) can use either switch for redundancy; however, each iSCSI VLAN must use a specific NIC.

Can someone help me get the right OVS configuration for this? My best result was bonding eno1 and eno2, but half of the iSCSI traffic would drop.
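For reference, the bond I tried looked roughly like this in /etc/network/interfaces (a simplified sketch; the full file is attached below):

    allow-vmbr0 bond0
    iface bond0 inet manual
        ovs_bonds eno1 eno2
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_options bond_mode=balance-slb

    auto vmbr0
    iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0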
 

Attachments

  • interfaces.txt
    1.2 KB · Views: 18
I am trying to recreate the same network configuration as in vSphere so i have installed openvswitch for its advanced capabilities.
What "advanced" capabilities does your setup need, that is no already working with the linux bridge? Asking, because the linux bridge is easier to setup and supports VLAN too.
 
I have iSCSI VLANs that are unique to the uplink switches: iSCSIA (VLAN 100) on one switch and iSCSIB (VLAN 200) on the other. Everything else (management and VM traffic) is active-passive across the switches.
 
Just wondering if I understand correctly.

You have 2 x 10Gb NICs, is that right? And you are looking for a way to share the two NICs like a vDS (virtual distributed switch)?

I'm not sure if OVS supports shares to limit throughput the way a vDS does, so that the link doesn't become saturated.

It would be good to confirm this first, as the link could easily become flooded, which means no QoS per VLAN and degraded performance.

Will be following this thread more closely :)
 
Not even a vDS. With the vSphere vSwitch you can connect to multiple uplink switches and assign priority: all active, active with fallback, or just one active with the remainder unused. This can be further modified so that individual vmks (like the ones assigned to the software iSCSI HBA) override the defaults and are assigned a particular uplink. I want to be able to reproduce that with OVS.

Attached are the config screens from vSphere.
 

Attachments

  • vSwitch-teaming.png
    49.9 KB · Views: 13
  • vSwitch0-iSCSI.png
    54.4 KB · Views: 10
  • vSwitch0-vlan102.png
    50.2 KB · Views: 10
OK, so vDS just uses a few different types of bonding which VMware has given nice names :) but which were originally all open source, as they didn't invent the wheel.

OVS and vDS can both bond links natively.
Both can be set to active-active.
Both can be set to active-passive (failover).
Both can be used with LACP, where the switch controls the active ports; all ports marked for LACP can be switched on/off as traffic increases/decreases to accommodate more bandwidth, etc.

See the OVS article below for all the available bonding types and more information to help you choose your bonding method.

If your switches are not capable of LACP, then you will need to choose SLB bonding.

Then you can choose between active/active or active/passive (failover); it will work the same as in your supplied screenshots.

http://docs.openvswitch.org/en/latest/topics/bonding/
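The mode itself is a single ovs_options line in the bond stanza, something like (a sketch of the option names from that article):

    # SLB: works with any switches, no switch-side configuration needed
    ovs_options bond_mode=balance-slb

    # LACP: needs switch support (and MLAG/stacking to span two switches)
    ovs_options bond_mode=balance-tcp lacp=active

    # plain failover
    ovs_options bond_mode=active-backup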

Hope the above helps.

""Cheers
G
 
The problem isn't bonding; I can do that easily enough. The problem is that certain VLANs must exclusively use certain enslaved interfaces. SLB may work, but there was massive packet loss and heavy load on the upstream switches when I last tried it.
 
Sounds like an MTU issue, potentially.

Is spanning tree protocol turned on? If so, turn it off.

Make sure all switch ports are set to jumbo frames (9216 MTU, depending on your switch model and brand).

Make sure the MTU is set appropriately on the NIC ports, to 9000.
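On the Proxmox side, the OVS stanzas take the MTU directly, e.g. (a sketch, reusing the bond/bridge names from earlier in the thread):

    # add to both the bond0 and vmbr0 stanzas:
    ovs_mtu 9000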

Give that a try.

Sorry, maybe I'm not understanding properly. Would you mind explaining why each bonded NIC needs to have a different VLAN?

From my understanding, OVS does all of that for you without needing to dedicate a VLAN to a NIC. From my perspective, that's what OVS does: it allows different VLANs to exist on bonded/bridged connections by tagging each frame with its VLAN ID for transport; the tag is then read on the other side and the frame is delivered to the correct host/port/network segment.
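For example, a tagged interface on the bonded OVS bridge is only a few lines, something like (a sketch; tag 102 and the addressing are assumed, to match your vSwitch0-vlan102 screenshot):

    allow-vmbr0 vlan102
    iface vlan102 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=102
        address 192.0.2.10
        netmask 255.255.255.0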

I'm confused why each NIC in your bridged (grouped) OVS bridge needs to have a separate VLAN dedicated solely to it.

This would just work with a simple bridge, and OVS isn't needed then.

What am I missing here?

Sorry to sound stupid, just not grasping the end goal and why.

""Cheers
G
 
