Proxmox 7.0 SDN beta test

@pieteras.meyer

Here is a new deb: http://odisoweb1.odiso.net/libpve-network-perl_0.4-4_all.deb (I haven't bumped the version to 0.4-5, but there are new features).

To install:

Code:
wget http://odisoweb1.odiso.net/libpve-network-perl_0.4-4_all.deb
dpkg -i libpve-network-perl_0.4-4_all.deb
systemctl restart pveproxy
systemctl restart pvedaemon


You can now enable VLAN tags && trunks in the VM NIC options. (That means 3 layers of tags with the qinq plugin, or 2 layers of tags with the vlan plugin.)

You just need to edit /etc/pve/sdn/vnets.cfg and, in your vnet, add the "vlanaware 1" option.
Example:

Code:
vnet: mynet
          tag 3000
          zone qinqzone
          vlanaware 1
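
For the VM NIC side, a minimal sketch of what these options can look like from the CLI with qm, assuming a hypothetical VM 100 attached to the vnet above (the VLAN values are placeholders):

Code:
# hypothetical VM id and VLAN values, for illustration only
qm set 100 -net0 virtio,bridge=mynet,tag=100,trunks=200-300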


I think it should cover all your cases, mixing the vlan && qinq plugins, with/without the vlanaware option.
Hi Spirit,

the following scenario doesn't seem to work:

Code:
zones.cfg

vlan: vlan
        bridge vmbr0

vnets.cfg

vnet: vnet4040
        tag 4040
        zone vlan
        vlanaware 1

for VLAN 4040, and then allow the guest to trunk any VLANs, similar to below:
1. we add a service VLAN in front of any VM/customer traffic, so like eth0.4040.(any customer traffic, untagged or tagged 1-4094)
Code:
- Attaching a VM's network adapter to the bridge and specifying a VLAN ID:
      VM's network configuration line:
        net0: virtio=E4:8D:8C:82:94:97,bridge=vmbr0,tag=1
      Generated command:
        /usr/bin/ovs-vsctl add-port vmbr0 tap101i0 vlan_mode=dot1q-tunnel tag=4040 other-config:qinq-ethtype=802.1q
      Result:
        Virtual router can communicate with all other network devices perfectly, herewith examples:
          Interface                   - VM                  - Network                   = Testing
          ether1                      - Untagged            - VLAN 4040                    = OK
          ether1-vlan50               - 802.1Q:1            - VLAN 4040 with QinQ 50       = OK
          ether1-vlan50-vlan10        - 802.1Q:1_802.1Q:50  - VLAN 4040 with QinQinQ 50:10 = OK
          ether1-vlan60               - 802.1Q:1_802.1Q:60  - VLAN 4040 with QinQ 60       = OK

And how would I do the below example?
3. then we can have the same as above, but instead of any tag we allow a range of, say, 10-20 [ eth0.4040.(10-20) ]
maybe like below?
Code:
auto vmbr1
iface vmbr1 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 10-20
    mtu 9000

I have also noticed that the host and VMs lose network for a couple of seconds when applying the config or when ifreload -a -d / ifreload -a is run.
Is this expected?
If it is expected, it means these configs cannot be applied during operating hours?
 
Yes, this worked, and I think maybe ifupdown2 should do this check?
Yes, I already have a script in the ifupdown2 package to do a config rewrite on package install,
for OVS for example (converting allow-vmbrX ... to auto ...).
I'll look at adding the missing source here.
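
For illustration, a rough sketch of what such a rewrite could look like (the actual script shipped in the package may differ; the patterns below only show the old allow-* to auto conversion):

Code:
# hypothetical sketch: convert old ifupdown OVS stanzas to ifupdown2 style
sed -i -e 's/^allow-ovs /auto /' -e 's/^allow-vmbr[0-9]* /auto /' /etc/network/interfaces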
 
So,

zones.cfg

vlan: vlan
        bridge vmbr0

vnets.cfg

vnet: vnet4040
        tag 4040
        zone vlan
        vlanaware 1

for VLAN 4040, and then allow the guest to trunk any VLANs, similar to below:
1. we add a service VLAN in front of any VM/customer traffic, so like eth0.4040.(any customer traffic, untagged or tagged 1-4094)

If vmbr0 is an OVS bridge, it should build something like this:

vmbr0 ----(OVS int port, tag 4040)----> vnet bridge (vlan-aware) ----(tag at VM NIC level or from the VM)----> vm

I'm not sure why it fails with OVS, but maybe I forgot to define "vlan_mode=dot1q-tunnel" in the vlan plugin.
(I have done it for the qinq plugin.)
I think it should work with a Linux vlan-aware vmbr0 bridge instead of OVS.
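
For reference, a minimal sketch of such a Linux vlan-aware vmbr0 in /etc/network/interfaces, assuming a hypothetical physical NIC eno1:

Code:
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094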

Can you send the content of /etc/network/interfaces.d/sdn?
It should be easy to fix.

3. then we can have the same as above, but instead of any tag we allow a range of, say, 10-20 [ eth0.4040.(10-20) ]
maybe like below?
I haven't implemented it yet; I need to add an option in the vnet to restrict the VLAN range.
Or it can already be done in the VM configuration (but it's not available in the GUI): you can edit /etc/pve/qemu-server/<vmid>.conf and add "net0: .....,trunks=10-20".
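
For example, a hedged sketch of such a line in <vmid>.conf (the MAC address and bridge name here are placeholders):

Code:
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vnet4040,trunks=10-20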


I have also noticed that the host and VMs lose network for a couple of seconds when applying the config or when ifreload -a -d / ifreload -a is run.
Is this expected?
If it is expected, it means these configs cannot be applied during operating hours?
Mmmm, this is unexpected. Maybe it's OVS-specific in my ifupdown2 implementation. (ovs-vsctl is pretty painful for changes, so I'm deleting the conf and recreating it, but it doesn't seem to be atomic :/ )
I'm pretty sure it doesn't happen with a Linux bridge as vmbr0. I'll run tests to see what happens.
 
I can reproduce the network packet loss on reload. I confirm that it's a bug in the OVS plugin in ifupdown2. I'm looking at how to improve that; it is technically possible to reload without network interruption.
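
For example, ovs-vsctl can chain several commands with -- so that they execute as a single database transaction, which avoids the window where a port has been deleted but not yet recreated. A sketch, using the port names from this thread:

Code:
# delete and recreate the port in one atomic ovs-vsctl transaction (sketch)
ovs-vsctl del-port vmbr0 ln_vnet4040 -- \
    add-port vmbr0 ln_vnet4040 tag=4040 -- \
    set Interface ln_vnet4040 type=internal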
 
Hi Spirit
Can you send the content of /etc/network/interfaces.d/sdn?
It should be easy to fix.

See below:
Code:
cat /etc/network/interfaces.d/sdn
#version:39

auto ln_vnet4040
iface ln_vnet4040
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=4040

auto vmbr0
iface vmbr0
        ovs_type OVSBridge
        ovs_ports ln_vnet4040

auto vnet4040
iface vnet4040
        bridge_ports ln_vnet4040
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
 
Hi,

it seems that after loading this version my sdn file changes version but does not generate any data:
root@pve00:~# cat /etc/network/interfaces.d/sdn
#version:51
Do you mean that the file is empty? That's pretty strange. (This could happen with a bad option in /etc/pve/sdn/vnets.cfg or /etc/pve/sdn/zones.cfg; can you send me their content?)



I just reuploaded
http://odisoweb1.odiso.net/libpve-network-perl_0.4-4_all.deb with my last changes
and also ifupdown2 3.0
http://odisoweb1.odiso.net/ifupdown2_3.0.0-1+pve1_all.deb

(This should fix the network loss on reload with Open vSwitch.)
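
Both can be installed the same way as the earlier deb, for example:

Code:
wget http://odisoweb1.odiso.net/ifupdown2_3.0.0-1+pve1_all.deb
dpkg -i ifupdown2_3.0.0-1+pve1_all.deb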
 
Hi Spirit
Do you mean that the file is empty? That's pretty strange. (This could happen with a bad option in /etc/pve/sdn/vnets.cfg or /etc/pve/sdn/zones.cfg; can you send me their content?)
See below as requested

Code:
root@pve00:~# cat /etc/pve/sdn/vnets.cfg
vnet: v4040
        tag 4040
        zone zvlan

root@pve00:~# cat /etc/pve/sdn/zones.cfg
vlan: zvlan
        bridge vmbr0

root@pve00:~# cat /etc/network/interfaces.d/sdn
#version:51

I just reuploaded
http://odisoweb1.odiso.net/libpve-network-perl_0.4-4_all.deb with my last changes
and also ifupdown2 3.0
http://odisoweb1.odiso.net/ifupdown2_3.0.0-1+pve1_all.deb

(This should fix the network loss on reload with Open vSwitch.)
I will load these and test ASAP
 
Hi

I have loaded the above, and with ifreload -a -d I am still getting loss.
(Screenshot attached: pve.JPG)

Code:
pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown2: 3.0.0-1+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-network-perl: 0.4-4
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-6
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
 


Hi Spirit,

as a test I removed all the SDN config, rebooted the host, and reloaded the config; after this I printed the below:
Code:
root@pve00:~# cat /etc/network/interfaces.d/sdn
#version:55

auto ln_vnet4040
iface ln_vnet4040
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=4040

auto vmbr0
iface vmbr0
        ovs_ports ln_vnet4040

auto vnet4040
iface vnet4040
        bridge_ports ln_vnet4040
        bridge_stp off
        bridge_fd 0
root@pve00:~# cat /etc/pve/sdn/vnets.cfg
vnet: vnet4040
        tag 4040
        zone zvlan

root@pve00:~# cat /etc/pve/sdn/zones.cfg
vlan: zvlan
        bridge vmbr0

root@pve00:~# cat /etc/pve/sdn/.version
55

root@pve00:~# brctl show
bridge name     bridge id               STP enabled     interfaces
vnet4040                8000.da421ce566a1       no              ln_vnet4040
root@pve00:~# ovs-vsctl list-ports switch_c | xargs -n1 ip link show  | grep mtu | column -t
ovs-vsctl: no bridge named switch_c
1:   lo:           <LOOPBACK,UP,LOWER_UP>             mtu  65536  qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
2:   eth0:         <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  mq       master  ovs-system  state  UP       mode   DEFAULT  group  default  qlen  1000
3:   eth1:         <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  mq       master  ovs-system  state  UP       mode   DEFAULT  group  default  qlen  1000
4:   ovs-system:   <BROADCAST,MULTICAST>              mtu  1500   qdisc  noop     state   DOWN        mode   DEFAULT  group  default  qlen   1000
5:   vmbr0:        <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
6:   vlan1:        <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
7:   vlan18:       <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
8:   vlan20:       <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
9:   vlan21:       <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
10:  vlan23:       <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
11:  vlan2:        <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
12:  bond0:        <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  1500   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
13:  ln_vnet4040:  <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  1500   qdisc  noqueue  master  vnet4040    state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
14:  vnet4040:     <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  1500   qdisc  noqueue  state   UP          mode   DEFAULT  group  default  qlen   1000
root@pve00:~# ovs-vsctl show
094148a3-ee91-4420-97b8-741af933c9ef
    Bridge "vmbr0"
        Port "vlan23"
            tag: 23
            Interface "vlan23"
                type: internal
        Port "vlan21"
            tag: 21
            Interface "vlan21"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
        Port "vlan18"
            tag: 18
            Interface "vlan18"
                type: internal
        Port "vlan2"
            tag: 2
            Interface "vlan2"
                type: internal
        Port "vmbr0"
            Interface "vmbr0"
                type: internal
        Port "ln_vnet4040"
            tag: 4040
            Interface "ln_vnet4040"
                type: internal
        Port "vlan20"
            tag: 20
            Interface "vlan20"
                type: internal
        Port "vlan1"
            Interface "vlan1"
                type: internal
    ovs_version: "2.12.0"
root@pve00:~# ovs-vsctl get Port ln_vnet4040 vlan_mode
[]

I don't see the dot1q-tunnel? Or should I add this option somewhere?
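
As a manual test, one could set the mode on the internal port directly with ovs-vsctl (a sketch; the SDN-generated config would normally be what sets this):

Code:
# set the OVS internal port to dot1q-tunnel mode by hand, then verify
ovs-vsctl set Port ln_vnet4040 vlan_mode=dot1q-tunnel
ovs-vsctl get Port ln_vnet4040 vlan_mode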
 
Hi Spirit

I tried with the below config, but the MTU on the bridge is still 1500, and I still get a network drop when applying the config. (Screenshot attached: pve.JPG)
Code:
root@pve00:~# cat /etc/pve/sdn/zones.cfg
vlan: zvlan
        bridge vmbr0
        mtu 9000
root@pve00:~# cat /etc/pve/sdn/vnets.cfg
vnet: vnet4040
        tag 4040
        zone zvlan
        vlanaware 1

cat /etc/network/interfaces.d/sdn
#version:57

auto ln_vnet4040
iface ln_vnet4040
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options vlan_mode=dot1q-tunnel tag=4040

auto vmbr0
iface vmbr0
        ovs_ports ln_vnet4040

auto vnet4040
iface vnet4040
        bridge_ports ln_vnet4040
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000
root@pve00:~# ovs-vsctl show
094148a3-ee91-4420-97b8-741af933c9ef
    Bridge "vmbr0"
        Port "vlan23"
            tag: 23
            Interface "vlan23"
                type: internal
        Port "vlan21"
            tag: 21
            Interface "vlan21"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
        Port "ln_vnet4040"
            tag: 4040
            Interface "ln_vnet4040"
                type: internal
        Port "vlan18"
            tag: 18
            Interface "vlan18"
                type: internal
        Port "vlan2"
            tag: 2
            Interface "vlan2"
                type: internal
        Port "vmbr0"
            Interface "vmbr0"
                type: internal
        Port "vlan20"
            tag: 20
            Interface "vlan20"
                type: internal
        Port "vlan1"
            Interface "vlan1"
                type: internal
    ovs_version: "2.12.0"
root@pve00:~# ovs-vsctl get Port ln_vnet4040 vlan_mode
"dot1q-tunnel"
root@pve00:~# ovs-vsctl list-ports switch_c | xargs -n1 ip link show  | grep mtu | column -t
ovs-vsctl: no bridge named switch_c
1:   lo:           <LOOPBACK,UP,LOWER_UP>             mtu  65536  qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
2:   eth0:         <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  mq       master  ovs-system  state  UP       mode   DEFAULT  group  default  qlen  1000
3:   eth1:         <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  mq       master  ovs-system  state  UP       mode   DEFAULT  group  default  qlen  1000
4:   ovs-system:   <BROADCAST,MULTICAST>              mtu  1500   qdisc  noop     state   DOWN        mode   DEFAULT  group  default  qlen   1000
5:   vmbr0:        <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
6:   vlan1:        <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
7:   vlan18:       <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
8:   vlan20:       <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
9:   vlan21:       <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
10:  vlan23:       <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
11:  vlan2:        <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
12:  bond0:        <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  1500   qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
15:  ln_vnet4040:  <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  1500   qdisc  noqueue  master  vnet4040    state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
16:  vnet4040:     <BROADCAST,MULTICAST,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  state   UP          mode   DEFAULT  group  default  qlen   1000
root@pve00:~# brctl show
bridge name     bridge id               STP enabled     interfaces
vnet4040                8000.7a58d4e18abd       no              ln_vnet4040
 
Hi,
about the packet loss on reload,

this should work with the new config in /etc/network/interfaces.d/sdn:

"
auto vmbr0
iface vmbr0
        ovs_ports ln_vnet4040
"

+ ifupdown2 3.0

(Can you send me the result of ifreload -a -d?)



About the dot1q-tunnel: for the vlan plugin, it'll be enabled if you add the vlanaware option on the vnet.
For the qinq plugin, it'll be enabled by default.


I'll look into the MTU.
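
In the meantime, a possible manual workaround sketch is to raise the MTU on the internal port by hand (assuming the interface name from this thread):

Code:
# temporary workaround until the SDN config sets the MTU on the OVS int port
ip link set ln_vnet4040 mtu 9000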
 
Hi Spirit,

I haven't implemented it yet; I need to add an option in the vnet to restrict the VLAN range.
Or it can already be done in the VM configuration (but it's not available in the GUI): you can edit /etc/pve/qemu-server/<vmid>.conf and add "net0: .....,trunks=10-20".

I tried adding trunk=2-4094 in the VM config file (see my config below), but as soon as I add it, the network adapter disappears from the GUI.
See net5 below:
Code:
agent: 1
bootdisk: virtio0
cores: 1
cpu: host,flags=+md-clear;+pcid;+spec-ctrl;+ssbd;+aes
memory: 256
name: LAB-1
net0: virtio=52:54:00:FF:3E:92,bridge=vmbr0,tag=4
net1: virtio=52:54:00:F9:11:DA,bridge=vmbr0,tag=100
net2: virtio=52:54:00:A9:7B:54,bridge=vmbr0,tag=101
net3: virtio=52:54:00:F6:E0:80,bridge=vmbr0,tag=102
net4: virtio=52:54:00:47:FE:4C,bridge=vmbr0,tag=103
net5: virtio=52:54:00:15:E8:78,bridge=vnet4040,trunk=2-4094
numa: 1
onboot: 0
ostype: l26
smbios1: uuid=dd0a2457-75a0-48d7-a65d-2243957e153d
sockets: 1
tablet: 0
virtio0: local-lvm:vm-201-disk-0,cache=none,size=128M
vmgenid: 51daab4a-a7b5-4098-a0e4-1416c391efb8
(Screenshot attached: pve.JPG)
 
Hi Spirit


+ ifupdown2 3.0

(Can you send me the result of ifreload -a -d?)

see the attached txt file for ifreload -a -d, and below the config from /etc/network/interfaces.d/sdn:

Code:
root@pve00:~# cat /etc/network/interfaces.d/sdn
#version:57

auto ln_vnet4040
iface ln_vnet4040
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options vlan_mode=dot1q-tunnel tag=4040

auto vmbr0
iface vmbr0
        ovs_ports ln_vnet4040

auto vnet4040
iface vnet4040
        bridge_ports ln_vnet4040
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

root@pve00:~# cat /etc/pve/sdn/vnets.cfg
vnet: vnet4040
        tag 4040
        zone zvlan
        vlanaware 1

root@pve00:~# cat /etc/pve/sdn/zones.cfg
vlan: zvlan
        bridge vmbr0
        mtu 9000
 

