Open vSwitch bridge not starting

alsenior
Jan 4, 2017
Hi everyone,

I have a new install and I'm trying to get Open vSwitch working for VM network access. I created the bridge, but it stays in the DOWN state: the underlying interface is UP, yet no traffic crosses the bridge, and the OVS internal port (OVSIntPort) sits in the UNKNOWN state.

This is the OVS section of /etc/network/interfaces:

auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports mgmt eth2

allow-vmbr1 mgmt
iface mgmt inet static
    address 10.101.250.35
    netmask 255.255.255.0
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    ovs_options tag=900

allow-vmbr1 eth2
iface eth2 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1

The installed Open vSwitch package version is 2.6.0-2.
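For debugging, a few read-only commands show whether OVS itself knows about the bridge (a sketch; the interface names vmbr1 and mgmt come from the config above):

```shell
# Read-only sanity checks on the Proxmox host; interface names (vmbr1, mgmt)
# are taken from the config above. Falls back to a notice if OVS is absent.
if command -v ovs-vsctl >/dev/null 2>&1; then
    ovs-vsctl --timeout=5 show   # does OVS list vmbr1 with ports mgmt and eth2?
    ip -br link show             # kernel view of the same interfaces
else
    echo "openvswitch-switch not installed"
fi
```

If vmbr1 or mgmt is missing from the ovs-vsctl output, the problem is in the OVS database rather than the kernel side.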
 
Hi, test with this setting:

https://pve.proxmox.com/wiki/Open_vSwitch#Bridges

Examples
Example 1: Bridge + Internal Ports + Untagged traffic

The only drawback I have had is the time it takes to restart the physical Proxmox host. The only solution I found was to create a cron job that restarts networking at boot:

# crontab -e

@reboot /etc/init.d/networking restart
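As an alternative to the reboot cron, it may be worth checking that the OVS stanzas use "allow-ovs" / "allow-<bridge>" rather than "auto", as in the wiki example above; with plain "auto", ifupdown can try to bring the bridge up before the Open vSwitch service is ready at boot. A sketch, using the bridge from the earlier post:

```
# Sketch: let the OVS ifupdown helpers order the bring-up (per the wiki),
# instead of "auto". Bridge/port names are taken from the config above.
allow-ovs vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports mgmt eth2
```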


Remember to configure your switch port in trunk mode; on a Cisco Catalyst:

configure terminal
interface GbX/X/X
switchport mode trunk
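Since the mgmt port in the config above is tagged (tag=900), the trunk also has to carry that VLAN. If the switch restricts the trunk's allowed-VLAN list, something like this would be needed as well (IOS syntax; VLAN 900 taken from the config above, GbX/X/X is the same placeholder interface):

```
! Only needed if the trunk's allowed-VLAN list is restricted
interface GbX/X/X
 switchport trunk allowed vlan add 900
```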
 
Hi Luis,

Just tried that but no dice, unfortunately. The bond part of the OVS bridge works, as can be seen from the switch:
ALICE-BREAKOUT-SW#show etherchannel summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator
        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port

Number of channel-groups in use: 1
Number of aggregators:           1

Group  Port-channel  Protocol  Ports
------+-------------+-----------+-----------------------------------------------
1      Po1(SU)       LACP      Gi0/1(P)  Gi0/2(P)

But the OVS setup does not seem to be working:

18: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1
    link/ether ce:7a:db:ee:54:dc brd ff:ff:ff:ff:ff:ff
19: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
    link/ether 00:26:55:4b:55:ae brd ff:ff:ff:ff:ff:ff
    inet6 fe80::226:55ff:fe4b:55ae/64 scope link
       valid_lft forever preferred_lft forever
20: mgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
    link/ether 96:5f:38:1e:59:84 brd ff:ff:ff:ff:ff:ff
    inet 10.101.250.35/24 brd 10.101.250.255 scope global mgmt
       valid_lft forever preferred_lft forever
    inet6 fe80::945f:38ff:fe1e:5984/64 scope link
       valid_lft forever preferred_lft forever
21: bond1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
    link/ether 06:5c:12:2d:14:8a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::45c:12ff:fe2d:148a/64 scope link
       valid_lft forever preferred_lft forever

This is the interface config now.

allow-vmbr1 bond1
iface bond1 inet manual
    ovs_bonds eth2 eth3
    ovs_type OVSBond
    ovs_bridge vmbr1
    ovs_options lacp=active bond_mode=balance-slb

auto vmbr0
iface vmbr0 inet static
    address 10.101.250.32
    netmask 255.255.255.0
    gateway 10.101.250.254
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

allow-vmbr1 mgmt
iface mgmt inet static
    address 10.101.250.35
    netmask 255.255.255.0
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    ovs_options tag=900

auto vmbr1
allow-ovs vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports mgmt bond1

Eth interfaces are omitted because they all show as up.
 
Hi Alsenior,

1) But does the bonding work for you?

2) I recommend that you do not use LACP: I was testing with a tool called IPTraf, and it only used one interface. Also, read my post on the forum called "Network Slow Machines", which covers some issues I found that affect the network speed of the virtual machines.
 
Hi Luis,

Not sure why, but it all just started working after the last reboot of the host.
 
Restarting networking on Proxmox never works for me; I always have to reboot to test new configurations.

It's very odd to have two interfaces on the same subnet; that will probably cause strange issues. You also really shouldn't have both a Linux bridge and an OVS bridge on the same host.
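One more thing worth checking in the bond stanza: the OVS documentation pairs lacp=active with bond_mode=balance-tcp, while balance-slb works without LACP at all. A sketch of the stanza with that combination (names taken from the config earlier in the thread; a suggestion, not a confirmed fix):

```
allow-vmbr1 bond1
iface bond1 inet manual
    ovs_bonds eth2 eth3
    ovs_type OVSBond
    ovs_bridge vmbr1
    ovs_options bond_mode=balance-tcp lacp=active
```

On the host, ovs-appctl bond/show bond1 and ovs-appctl lacp/show bond1 report what was actually negotiated with the switch.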
 
