Migrating from a simple network config to OpenVSwitch

I'm planning to move my physical firewall into a Proxmox VM. For this, I need to "upgrade" my network config. Currently, Proxmox is connected to an access port on my switch. In the new config, Proxmox shall receive all VLANs on an LACP trunk port and pass them through to one VM.

Current config:

Code:
auto lo
iface lo inet loopback

iface enp193s0f1np1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.6
        hwaddress 1c:34:da:7f:b1:53
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports enp193s0f1np1
        bridge_stp off
        bridge_fd 0

auto vmbr0:0
iface vmbr0:0 inet static
        address 192.168.1.101
        netmask 255.255.255.0

iface enp10s0f0 inet manual

iface enp10s0f1 inet manual

iface enp12s0f3u2u2c2 inet manual

iface enp193s0f0np0 inet manual

After reading through the Proxmox wiki, the OpenVSwitch GitHub page, and a bit of googling, I've put together the following new config, which shall:
  • create a bond
  • accept VLANs 10 (untagged), 30, 50, 60, 70 on that bond
  • provide local access for the proxmox host to VLAN 10
It'd be great if someone with a bit of experience could look at this and confirm that it is about right (this is my first config ever with/for OpenVSwitch):

Code:
auto lo
# loopback interface
iface lo inet loopback

# bond
auto bond0
iface bond0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSBond
        ovs_bonds enp193s0f0np0 enp193s0f1np1
        ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast tag=10 vlan_mode=native-untagged trunks=10,30,50,60,70

# bridge for bond, local interface, VMs
auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 vlan10

# proxmox host vlan10 access
auto vlan10
iface vlan10 inet static
        address 192.168.1.6
        netmask 255.255.255.0
        gateway 192.168.1.1

# second IP for proxmox host
auto vlan10:0
iface vlan10:0 inet static
        address 192.168.1.101
        netmask 255.255.255.0

# remaining interfaces

iface enp10s0f0 inet manual

iface enp10s0f1 inet manual

iface enp12s0f3u2u2c2 inet manual

Bonus question:

Currently all VMs use vmbr0. If I change the network config as above, how do I assign an access port (VLAN 10) to existing VMs? Is there anything I need to change in the VM configs? And how could I provide a trunk with all VLANs to the new firewall VM?

Thanks!
 
I'm not an OVS expert, so I can't help much there; like spirit, I just use Linux bridges to accomplish most of what you are doing here. Just a couple of unasked-for comments/questions.

1. Why do you need two IPs on this Proxmox node in the 192.168.1.0/24 subnet? That might cause issues.
2. What is the vlan10:0 (colon-zero) notation for? I have not seen this used.
 
2. What is the vlan10:0 (colon-zero) notation for? I have not seen this used.
:0 is the old alias notation from the iproute1 days, ten-odd years ago. (It has been deprecated for years and shouldn't be used anymore.)

Instead, it's possible to simply add multiple "address ..." lines in a single interface stanza.
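For example, with the two host addresses from the original config folded into one stanza (sketch only):

```
auto vlan10
iface vlan10 inet static
        address 192.168.1.6/24
        address 192.168.1.101/24
        gateway 192.168.1.1
```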
 
Yes, thanks for reminding me that I'm getting old. :)

New version:

Code:
auto lo
# loopback interface
iface lo inet loopback

# bond
auto bond0
iface bond0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSBond
        ovs_bonds enp193s0f0np0 enp193s0f1np1
        ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast tag=10 vlan_mode=native-untagged trunks=10,30,50,60,70

# bridge for bond, local interface, VMs
auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 vlan10

# proxmox host vlan10 access
auto vlan10
iface vlan10 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=10
        address 192.168.1.6
        address 192.168.1.101
        netmask 255.255.255.0
        gateway 192.168.1.1

# remaining interfaces

iface enp10s0f0 inet manual

iface enp10s0f1 inet manual

iface enp12s0f3u2u2c2 inet manual

It's still not clear to me how I would configure VMs with (a) access to VLAN 10 only (untagged access port) and (b) trunk access to all VLANs with OpenVSwitch. I can't seem to find the right Google search terms to find documentation for this...
 
Not sure if this will answer your questions, but I use OVS with:
- an OVS bridge, defining all the VLANs on it.
- for each VLAN number, an OVS IntPort.
Then when you create a VM you only have to set the VLAN number (OVS IntPort) in the VLAN Tag box and that's it.
(leave the VLAN Tag box empty for the default VLAN 1)
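For what it's worth, one such IntPort stanza in /etc/network/interfaces could look roughly like this (VLAN 30 and the address are made-up placeholders; the port also has to appear in the bridge's ovs_ports list):

```
auto vlan30
iface vlan30 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=30
        address 192.168.30.2/24
```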
 
That's the correct Proxmox way to set a VLAN tag on a VM NIC. (If you don't define a VLAN tag, the NIC is a trunk allowing all VLANs.)
It's possible to filter the allowed VLANs on the trunk; this isn't in the GUI yet, but you can edit the VM configuration: net0: ....,trunks=10,30,50,60,70
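For illustration, the resulting net0 lines in the VM config might look like this (the MAC is a placeholder, and if I read the qm manpage right, the VLAN list in the config file itself is semicolon-separated):

```
# access port: the VM gets VLAN 10 untagged
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,tag=10

# trunk limited to the listed VLANs
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,trunks=10;30;50;60;70
```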


You don't need to create an OVSIntPort for each VLAN in /etc/network/interfaces; OVSIntPorts are only needed if you want to define an IP on the host for that VLAN.


With OVS, it's also possible to create a "fake bridge", where a fake OVS bridge = one specific VLAN:

https://git.proxmox.com/?p=ifupdown2.git;a=commit;h=7aa3a5e6b614d943c76ece6cabc18971ae28339d

but I'm not sure that support has been released in the Proxmox ifupdown2 package yet

Code:
auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge

auto vmbr0v10
iface vmbr0v10 inet manual
        ovs_type OVSBridge
        ovs_bridge vmbr0
        ovs_options vmbr0 10
 
Thanks all for the help. I did fiddle around with this quite a bit. I can get things to work up to the point that I create a Linux bond and bridge it:

Code:
# loopback interface
auto lo
iface lo inet loopback

# physical interfaces

iface enp193s0f0np0 inet manual

iface enp193s0f1np1 inet manual

iface enp10s0f0 inet manual

iface enp10s0f1 inet manual

iface enp12s0f3u2u2c2 inet manual

# bond
auto bond0
iface bond0 inet manual
        bond-slaves enp193s0f0np0 enp193s0f1np1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

# bridge for bond, local interface, VMs
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.6
        address 192.168.1.101
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

But when I try to use the OpenVSwitch and VLAN configuration posted above (and change the ports on my switch to trunk with 10U, 30T, 50T, 60T, 70T), I have no connectivity beyond the local proxmox host. I.e., I can ping both IPs of the host itself on VLAN10, but nothing else, not even the neighboring switch, let alone the gateway or anything behind it.
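For completeness, the OVS side can be inspected with the tools from the openvswitch-switch package; I'd expect these to show whether the bond came up and whether LACP actually negotiated with the switch:

```shell
# layout of bridges/ports as OVS built them
ovs-vsctl show

# LACP negotiation status of the bond (local and partner state)
ovs-appctl lacp/show bond0

# per-member status of the bond
ovs-appctl bond/show bond0
```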

ifconfig with OpenVSwitch (VMs get started automatically, so there are a ton of fwbr and tap interfaces in there - I removed those to make it easier to read, and because my concern currently is "just" connectivity of the Proxmox host):

Code:
bond0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::5c4c:35ff:feff:6bb0  prefixlen 64  scopeid 0x20<link>
        ether 9e:af:fd:62:c9:11  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14  bytes 1156 (1.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan10: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.6  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::9c1e:aeff:fef5:c536  prefixlen 64  scopeid 0x20<link>
        ether 9e:1e:ae:f5:c5:36  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1741  bytes 78829 (76.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vmbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::502c:79ff:fe21:274d  prefixlen 64  scopeid 0x20<link>
        ether 1c:34:da:7f:b1:52  txqueuelen 1000  (Ethernet)
        RX packets 7838  bytes 277226 (270.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14  bytes 1156 (1.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I have saved dmesg from these attempts. I cannot really find anything helpful in there, but am happy to post any extract with different search terms.

Code:
# cat dmesg.txt | egrep "(ovs|mlx5|enp|openvswitch)"
[    2.688850] tg3 0000:0a:00.1 enp10s0f1: renamed from eth1
[   10.169510] tg3 0000:0a:00.0 enp10s0f0: renamed from eth0
[   10.176089] mlx5_core 0000:c1:00.0: firmware version: 14.32.1010
[   10.176137] mlx5_core 0000:c1:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
[   10.485367] mlx5_core 0000:c1:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
[   10.489870] mlx5_core 0000:c1:00.0: Port module event: module 0, Cable plugged
[   10.529784] mlx5_core 0000:c1:00.1: firmware version: 14.32.1010
[   10.529861] mlx5_core 0000:c1:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
[   10.855325] mlx5_core 0000:c1:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
[   10.860555] mlx5_core 0000:c1:00.1: Port module event: module 1, Cable plugged
[   10.901077] mlx5_core 0000:c1:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[   11.284730] mlx5_core 0000:c1:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
[   11.299737] mlx5_core 0000:c1:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[   11.709369] mlx5_core 0000:c1:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295
[   12.187685] mlx5_core 0000:c1:00.1 enp193s0f1np1: renamed from eth1
[   12.270810] mlx5_core 0000:c1:00.0 enp193s0f0np0: renamed from eth0
[   17.669168] openvswitch: Open vSwitch switching datapath
[   18.031024] device ovs-system entered promiscuous mode
[   18.035189] Failed to associated timeout policy `ovs_test_tp'
[   18.258612] device enp193s0f0np0 entered promiscuous mode
[   18.277238] device enp193s0f1np1 entered promiscuous mode
 
Why not simply use a VLAN-aware bridge for your config?


Code:
# bond
auto bond0
iface bond0 inet manual
        bond-slaves enp193s0f0np0 enp193s0f1np1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 10,30,50,60,70

auto vmbr0.10
iface vmbr0.10 inet static
        address 192.168.1.6/24
        address 192.168.1.101/24
        gateway 192.168.1.1
 
I have the same problem with a VLAN-aware bridge... The host can only ping itself: no VMs, not the switch it is connected to, nothing.

Code:
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        ether 1c:34:da:7f:b1:52  txqueuelen 1000  (Ethernet)
        RX packets 6422  bytes 805844 (786.9 KiB)
        RX errors 0  dropped 14  overruns 0  frame 0
        TX packets 1748  bytes 128061 (125.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp193s0f0np0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 1c:34:da:7f:b1:52  txqueuelen 1000  (Ethernet)
        RX packets 4987  bytes 595492 (581.5 KiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 751  bytes 53984 (52.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp193s0f1np1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 1c:34:da:7f:b1:52  txqueuelen 1000  (Ethernet)
        RX packets 1435  bytes 210352 (205.4 KiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 997  bytes 74077 (72.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vmbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::1e34:daff:fe7f:b152  prefixlen 64  scopeid 0x20<link>
        ether 1c:34:da:7f:b1:52  txqueuelen 1000  (Ethernet)
        RX packets 1820  bytes 82515 (80.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 401  bytes 26716 (26.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vmbr0.10: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.6  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::1e34:daff:fe7f:b152  prefixlen 64  scopeid 0x20<link>
        ether 1c:34:da:7f:b1:52  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 393  bytes 25580 (24.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Code:
# cat dmesg.vlan.txt | egrep "(ovs|mlx5|enp|VLAN|vlan|8021q)"
[    2.653742] tg3 0000:0a:00.0 enp10s0f0: renamed from eth0
[    5.276479] tg3 0000:0a:00.1 enp10s0f1: renamed from eth1
[    5.282996] mlx5_core 0000:c1:00.0: firmware version: 14.32.1010
[    5.283042] mlx5_core 0000:c1:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
[    5.590152] mlx5_core 0000:c1:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
[    5.594760] mlx5_core 0000:c1:00.0: Port module event: module 0, Cable plugged
[    5.634098] mlx5_core 0000:c1:00.1: firmware version: 14.32.1010
[    5.634171] mlx5_core 0000:c1:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
[    5.952943] mlx5_core 0000:c1:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
[    5.958499] mlx5_core 0000:c1:00.1: Port module event: module 1, Cable plugged
[    5.999201] mlx5_core 0000:c1:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[    6.357904] mlx5_core 0000:c1:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
[    6.369227] mlx5_core 0000:c1:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[    6.754597] mlx5_core 0000:c1:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295
[    7.244875] mlx5_core 0000:c1:00.0 enp193s0f0np0: renamed from eth0
[    7.258369] mlx5_core 0000:c1:00.1 enp193s0f1np1: renamed from eth1
[   14.938696] mlx5_core 0000:c1:00.0 enp193s0f0np0: Link up
[   14.949282] bond0: (slave enp193s0f0np0): Enslaving as a backup interface with an up link
[   15.857613] mlx5_core 0000:c1:00.1 enp193s0f1np1: Link up
[   15.881672] bond0: (slave enp193s0f1np1): Enslaving as a backup interface with an up link
[   16.009189] 8021q: 802.1Q VLAN Support v1.8
[   16.069279] 8021q: adding VLAN 0 to HW filter on device enp193s0f0np0
[   16.072844] 8021q: adding VLAN 0 to HW filter on device enp193s0f1np1
[   16.095141] 8021q: adding VLAN 0 to HW filter on device bond0
[   21.634759] device enp193s0f0np0 entered promiscuous mode
[   21.634817] device enp193s0f1np1 entered promiscuous mode
[   21.654809] mlx5_core 0000:c1:00.0 enp193s0f0np0: S-tagged traffic will be dropped while C-tag vlan stripping is enabled
[   22.071766] mlx5_core 0000:c1:00.0: lag map port 1:1 port 2:2 shared_fdb:0

I also tried adding vlan_filtering 1 and vlan_default_pvid 10 to vmbr0, and I tried changing the switch config to tag all VLANs (i.e., change 10U into 10T). Neither helped. Once I set the port on the switch back to access and remove the VLAN config from Proxmox, all is fine again.
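For anyone else debugging this: the kernel's view of a VLAN-aware bridge can be dumped with iproute2, which should show the per-port VLAN membership and PVIDs:

```shell
# per-port VLAN table of the bridge (check PVID/untagged on bond0)
bridge vlan show

# port flags and states on the bridge
bridge link show

# verify that VLAN filtering is actually enabled on vmbr0
cat /sys/class/net/vmbr0/bridge/vlan_filtering
```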

The openvswitch package is still installed. But that shouldn't have an impact, I guess.
 
Yep, I did this on my Cisco switch just in the same way that I have configured other bonded trunks on that thing.

But I did just find the problem: it was a simple typo in the config. It looks like the network stack does not complain if you miss a hyphen in the right place... :rolleyes:

That aside, I still don't know why this did not work with OpenVSwitch, but I'll continue with the Linux bridge now for the time being.

Thanks for your help!
 
