PVE4 to PVE5 - openvswitch vs bridges

czechsys

Renowned Member
Nov 18, 2015
Hi,

we are currently running PVE4 with openvswitch due to its ability to add network ports to VLANs on the switch without needing to reconfigure any interfaces on the Proxmox side. This works perfectly, except for one thing: every non-physical interface has "state DOWN" or "state UNKNOWN" (even bonds).

The problem is that the vmbrX interfaces are reported as administratively down to our monitoring; the rest of the logical interfaces are fine.
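For reference, the "state DOWN"/"state UNKNOWN" strings come straight from ip link output; a quick way to list the affected interfaces (just a sketch, the exact names will differ per host):

Code:
ip -o link | grep -E 'state (DOWN|UNKNOWN)'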

I am looking to solve this issue with PVE5 before upgrading. Debian 9 has reworked VLAN support (multiple VLANs on one bridge, etc.), which works perfectly and without the "DOWN/UNKNOWN" issue.

So, my questions:

1] openvswitch in PVE5 - what state is shown for vmbrX?
2] if I use the new vlan-raw-device setup in PVE5 instead of openvswitch, is there a way to create a setup that does not require reconfiguring the network on every PVE node when a VLAN is added or removed?
 

1] openvswitch in PVE5 - what state is shown for vmbrX?

No change in PVE 5 - however, you can set these interfaces UP manually.
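For example (a minimal sketch - vmbr0 stands in for whichever bridge your monitoring flags):

Code:
ip link set dev vmbr0 up

Note this does not survive a reboot, so it would have to be repeated on every boot (e.g. via a post-up hook).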

2] if I use the new vlan-raw-device setup in PVE5 instead of openvswitch, is there a way to create a setup that does not require reconfiguring the network on every PVE node when a VLAN is added or removed?

The question is not quite clear to me: of course you have to configure what you want to have. Or do you mean without rebooting the Proxmox host? The latter is not supported in principle, but at your own risk you can try the following:


- Set the configuration in /etc/network/interfaces, e.g.

Code:
# VLAN 99 tagged on top of bond0
auto vlan99
iface vlan99 inet manual
    vlan_raw_device bond0

# Bridge on top of the VLAN interface, with a host IP
auto vmbr99
iface vmbr99 inet static
    address 192.168.91.3
    netmask 255.255.255.0
    bridge_ports vlan99
    bridge_stp off
    bridge_fd 0

- Then run

Code:
ifup vlan99
ifup vmbr99
 
Well, setting them UP manually is no solution for monitoring systems... but we can live with it.

Yes, the main goal is to allow VLANs to be used by VMs without any reconfiguration or reboot on the PVE hosts. Those VLANs don't even have an IP on the PVE host.
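For that use case, the reworked Debian 9 VLAN support mentioned above should fit: a minimal sketch of a VLAN-aware bridge in /etc/network/interfaces, assuming bond0 as the uplink (the names here are examples):

Code:
auto vmbr0
iface vmbr0 inet manual
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    bridge_vlan_aware yes

With such a bridge, the VLAN tag is set per VM NIC in the guest configuration, so adding or removing a VLAN only touches the switch port and the VM settings, not the host network files.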