OVS Netgear config?

Stefan Pettersson

Renowned Member
Feb 7, 2015
Stockholm, Sweden
Hello, I need help with which configuration I should use with OVS on Proxmox.

I have two nodes with two NICs each. I want the NICs in an LACP bond, and then bridged to vmbr0.

I have understood that it is best practice to use VLANs, so the Proxmox management interface is on VLAN 50.

I have an 8-port Netgear switch that supports VLANs and LACP (a GS108E or something). Should the LAGs I have configured on the Netgear be tagged with VLAN 50, or are there any other settings on the switch I should make? That is, should the ports be marked with "T", "U", or nothing?

I have tried numerous configs, including the one I found on the Proxmox wiki. None of them works, and that's why I am writing here.

Everything works when adding a Linux bond and bridge; the config for that is:

# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto bond0
iface bond0 inet manual
  slaves eth0 eth1
  bond_miimon 100
  bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
  address 10.0.0.9
  netmask 255.255.255.0
  gateway 10.0.0.1
  bridge_ports bond0
  bridge_stp on
  bridge_maxwait 0
  bridge_maxage 0
  bridge_fd 0
  bridge_ageing 0
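
For reference, the state of an 802.3ad bond can be checked after boot (assuming the standard kernel bonding driver):

Code:
# shows bond mode, LACP partner details and per-slave MII status
cat /proc/net/bonding/bond0
# lists the bridge and its attached ports
brctl show vmbr0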
 
https://forum.proxmox.com/threads/n...inux-bridge-to-openvswitch.24157/#post-121537
On your host do the following:
  1. apt-get install openvswitch-switch
  2. Log into the Proxmox GUI.
  3. Delete vmbr0.
  4. Create an OVSBond with eth0 and eth1.
  5. Create an OVSBridge with the bond as a port.
  6. Create an OVSIntPort for your IP (Proxmox GUI) on your OVSBridge (repeat for all IP/VLAN combos your Proxmox host should be part of).
  7. Reboot Proxmox.
  8. If you do not get GUI access:
  • Log into Proxmox (local terminal).
  • Execute /etc/init.d/networking restart (from there on it should work properly).
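
Once the node is back up, the result can be verified from the shell (a sketch; the names assume the steps above):

Code:
# the bridge, the bond and the internal port should all be listed
ovs-vsctl show
# LACP/bond status as OVS sees it
ovs-appctl bond/show bond0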

Now go to
https://pve.proxmox.com/wiki/Open_vSwitch#Example_2:_Bond_.2B_Bridge_.2B_Internal_Ports
And look at this example:
Code:
# Bond eth0 and eth1 together
allow-vmbr0 bond0
iface bond0 inet manual
  ovs_bridge vmbr0
  ovs_type OVSBond
  ovs_bonds eth0 eth1
  # Force the MTU of the physical interfaces to be jumbo-frame capable.
  # This doesn't mean that any OVSIntPorts must be jumbo-capable. 
  # We cannot, however set up definitions for eth0 and eth1 directly due
  # to what appear to be bugs in the initialization process.
  pre-up ( ifconfig eth0 mtu 9000 && ifconfig eth1 mtu 9000 )
  ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast
  mtu 9000

change the "bond_mode=balance-tcp" to the LCAP mode you want to use. Compare this with https://en.wikipedia.org/wiki/Link_aggregation#Driver_modes
I Personally prefer balance-rr.

Then look up how to set said mode with yoru netgear switch(s).
Without knowing which model you have its hard for me to do that remotely as i do not have any netgear switches at hand.
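
Note that OVS itself only implements the bond modes active-backup, balance-slb and balance-tcp (balance-rr is a mode of the Linux kernel bonding driver, not of OVS), so on an OVS bond the ovs_options line would be one of these (a sketch):

Code:
# LACP with L3/L4 hashing - needs LACP configured on the switch
ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast
# source-MAC load balancing - works without LACP on the switch
ovs_options bond_mode=balance-slb
# plain failover - no switch-side configuration required
ovs_options bond_mode=active-backup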
 
What you do is this:
You assign OVSIntPorts to your Proxmox node.
In order to get access to the web (and therefore updates), you need to make sure at least one of them has a gateway to connect your Proxmox node to the internet.

You DO NOT assign ANY IPs to your bridge. Check the link I quoted again, as I only quoted half the example.
Then, when you need to give KVM/LXC access to the bridge, you do that via the GUI. You basically select the (bonded) bridge, then assign IPs in their appropriate subnets / VLAN tags (if you want to use those).
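
The bridge stanza itself therefore stays IP-less, roughly like this (a sketch; the IntPort names are placeholders):

Code:
auto vmbr0
iface vmbr0 inet manual
  ovs_type OVSBridge
  # the bond plus every OVSIntPort hangs off the bridge
  ovs_ports bond0 mgmt0 stor0 cluster0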

Example:
note: you can use any IP-range that tickles your fancy.


On 1st Proxmox-node:
- OVSIntPort ip 192.168.2.101/24 gateway 192.168.2.1 (management IP + internet provider sitting on 192.168.2.1)
- OVSIntPort ip 10.1.1.101/24 no gateway (Proxmox subnet to facilitate Proxmox-to-storage communication, e.g. Ceph, GlusterFS, NFS, etc.)
- OVSIntPort ip 10.2.1.101/16 no gateway VlanTag=2 (used for Proxmox cluster communication via Corosync 2)

On 2nd Proxmox-node:
- OVSIntPort ip 192.168.2.102/24 gateway 192.168.2.1 (management IP + internet provider sitting on 192.168.2.1)
- OVSIntPort ip 10.1.1.102/24 no gateway
- OVSIntPort ip 10.2.1.102/16 no gateway VlanTag=2

On 3rd Proxmox-node:
- OVSIntPort ip 192.168.2.103/24 gateway 192.168.2.1 (management IP + internet provider sitting on 192.168.2.1)
- OVSIntPort ip 10.1.1.103/24 no gateway
- OVSIntPort ip 10.2.1.103/16 no gateway VlanTag=2

On a VM (let's say you use a NAS like OpenMediaVault/TrueNAS):
- assign IP 192.168.2.201/24 gateway 192.168.2.1 (web access to your NAS)
- assign IP 10.1.1.201/24 no gateway (used to connect the NAS via NFS to the Proxmox nodes)

On a VM that does not need to talk to Proxmox:
- assign IP 192.168.2.X/24 gateway 192.168.2.1 (web access to your VM)

note:
For KVM you need to set the bridge and VLAN tags via the GUI, then assign IP/subnet information inside the VM.
For LXC you assign bridge + tag + IP/subnet information via the GUI.
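
Translated into /etc/network/interfaces syntax, the three internal ports of the 1st node could look roughly like this (a sketch using the example addresses above; the port names mgmt0, stor0 and cluster0 are made up):

Code:
# management + internet access
allow-vmbr0 mgmt0
iface mgmt0 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  address 192.168.2.101
  netmask 255.255.255.0
  gateway 192.168.2.1

# storage network, no gateway
allow-vmbr0 stor0
iface stor0 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  address 10.1.1.101
  netmask 255.255.255.0

# cluster/corosync network, tagged with VLAN 2
allow-vmbr0 cluster0
iface cluster0 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=2
  address 10.2.1.101
  netmask 255.255.0.0

Each of these names also has to appear in the bridge's ovs_ports line.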
 
OK, I have now tried the config and am getting more confused. I have the same config on both nodes (apart from the IP):

allow-vmbr0 pve0mngmnt
iface pve0mngmnt inet static
  address 10.0.0.9
  netmask 255.255.255.0
  gateway 10.0.0.1
  ovs_type OVSIntPort
  ovs_bridge vmbr0

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

allow-vmbr0 bond0
iface bond0 inet manual
  ovs_bonds eth0 eth1
  ovs_type OVSBond
  ovs_bridge vmbr0
  ovs_options lacp=active bond_mode=balance-tcp

auto vmbr0
iface vmbr0 inet manual
  ovs_type OVSBridge
  ovs_ports bond0 pve0mngmnt

The IP of the second node is 10.0.0.10.

From the second node I can't even ping the gateway??

But everything works from the first node??
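
The usual first checks on the failing node would be along these lines:

Code:
# is the IntPort up and does it have the address?
ip addr show pve0mngmnt
# is there a default route?
ip route
# does OVS see the bridge, bond and port?
ovs-vsctl show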

*EDIT*

OK, now it's working!! :-)

Now to play around with new settings :-)

Thanks for the help!! I really appreciate it, and you deserve a star!! ;-)
 
Another problem has arisen now. Every time I reboot the nodes I have to bring the interfaces up with /etc/init.d/networking restart. Otherwise, when I ping the gateway it says:

connect: Network is unreachable

How can I fix this problem??

The nodes were also in a cluster during the migration; after the migration I removed one node and forced it back in again. Not best practice, so I will reinstall...
 
You say you have the config working...
What's your current config? (output of cat /etc/network/interfaces)

You say you have to do /etc/init.d/networking restart every time you boot. The only time I had that exact issue pop up (constantly) was when I was not using the right openvswitch package.

Post the output of "pveversion -v", please.

You should have, afaik, "openvswitch-switch: 2.3.2-2".
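
To see which version is installed and to pull the current one (assuming the Proxmox repository is configured), something like this should do:

Code:
# which version is installed right now?
dpkg -s openvswitch-switch | grep Version
# upgrade to the version the repo carries
apt-get update && apt-get install openvswitch-switch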
 
This is the interface config:

allow-vmbr0 pve0mngmnt
iface pve0mngmnt inet static
  address 10.0.0.9
  netmask 255.255.255.0
  gateway 10.0.0.1
  ovs_type OVSIntPort
  ovs_bridge vmbr0

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

allow-vmbr0 bond0
iface bond0 inet manual
  ovs_bonds eth0 eth1
  ovs_type OVSBond
  ovs_bridge vmbr0
  ovs_options lacp=active bond_mode=balance-tcp

auto vmbr0
iface vmbr0 inet manual
  ovs_type OVSBridge
  ovs_ports bond0 pve0mngmnt

And pveversion -v is:

proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-48 (running version: 4.0-48/0d8559d0)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-22
qemu-server: 4.0-30
pve-firmware: 1.1-7
libpve-common-perl: 4.0-29
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-25
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-9
pve-container: 1.0-6
pve-firewall: 2.0-12
pve-ha-manager: 1.0-9
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie
openvswitch-switch: 2.3.0+git20140819-3