Proxmox 7.0 SDN beta test

About the MTU:
This happens because ifupdown2 sets 1500 by default if "mtu ..." is not defined.
Since we use ovs_mtu for OVS, the interface is first set to 1500 and then OVS is set to 9000 (and OVS then overwrites the MTU on the interface again).

I'll look into fixing this, but I don't think it can cause packet loss.


About your config:
Code:
auto lo
iface lo inet loopback
        pre-up ifconfig eth0 mtu 9000
        pre-up ifconfig eth1 mtu 9000
The pre-up lines shouldn't be necessary with ifupdown2. (There was a bug in ifupdown1 where you needed to define the MTU on the ethX interfaces of a bond, but I have fixed this in ifupdown2: the ethX interfaces inherit the MTU defined on the bond.)
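(To illustrate the inheritance, a minimal sketch assuming a plain Linux bond; interface names are examples:)

Code:
auto bond0
iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        # with ifupdown2 the slaves eth0/eth1 inherit this MTU,
        # so no pre-up ifconfig lines are needed
        mtu 9000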


Could you try running "ifup <interface> -d" on the other interfaces too (bridges, or OVS interfaces)?

I really don't see where it comes from... The ovs-vsctl commands in the reload debug log seem to be OK now, and I can't reproduce it on my side :/
 
I don't know if it's related, but I can reproduce packet loss with an ethX interface in an OVS bridge:

with ovs_mtu 1500, a simple
"echo 1499 > /sys/class/net/eth0/mtu"

gives me packet loss for a few seconds.

On an OVSIntPort, the echo command doesn't change the MTU.
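(For what it's worth: on OVS-managed ports the MTU is owned by the OVS database, so a sysfs write gets overridden. Assuming OVS >= 2.6, a specific MTU can instead be requested via the mtu_request column, e.g.:)

Code:
# ask OVS to apply MTU 9000 to an interface it manages
ovs-vsctl set Interface eth0 mtu_request=9000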


Could you try to edit /etc/network/interfaces and /etc/network/interfaces.d/sdn, and add both ovs_mtu and mtu with the same value on all interfaces where you need MTU 9000?

For example:

Code:
auto vlan20
iface vlan20 inet static
        address 10.255.20.9/24
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_mtu 9000
        mtu 9000
        ovs_options tag=20
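(After an "ifreload -a", a quick sketch to check that the kernel and the OVS database agree on the MTU:)

Code:
ip link show vlan20                   # kernel view of the MTU
ovs-vsctl get Interface vlan20 mtu    # current MTU as recorded by OVS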
 
Hi Spirit,

I think this could be the issue: if the MTU on the interface is first set back to 1500 and then to 9000 again, that could cause the interface to drop for a moment.

I have loaded the new ifupdown2_3.0.0-1+pve1_all.deb and rebooted the host for good measure; see the attached config files for /etc/network/interfaces and /etc/network/interfaces.d/sdn.

I'm still getting network loss for about 2 seconds when doing ifreload -a -d; see the attached screenshot and files.

[screenshot attached: external_vm.JPG]
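(For reference, a timestamped ping from an external host makes a ~2 second gap easy to spot; the target IP is just an example:)

Code:
# -D prints a Unix timestamp per reply, so outage gaps stand out
ping -D -i 0.2 10.255.20.9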

OK, I don't see the MTU change in the log with the latest ifupdown2 version.

It seems to be another problem... I really don't know...

Can you try "ifup <interface> -d" for each interface/bridge/... in /etc/network/interfaces and /etc/network/interfaces.d/sdn?
 
OK, sorry. I'm still seeing:

info: executing ifconfig eth0 mtu 9000
info: executing ifconfig eth1 mtu 9000
(from your pre-up)

info: writing "1500" to file /sys/class/net/eth0/mtu
(because there is no ovs_mtu on eth0)


Can you remove the pre-up lines from lo:

Code:
auto lo
iface lo inet loopback
        pre-up ifconfig eth0 mtu 9000
        pre-up ifconfig eth1 mtu 9000

and add

Code:
auto eth0
iface eth0 inet manual
    ovs_mtu 9000

auto eth1
iface eth1 inet manual
    ovs_mtu 9000
 
Hi Spirit,

The last changes seem to have done the trick; I don't see packet loss anymore.
 


OK, finally, great!
I don't have a way to avoid this if you keep the MTU change in a pre-up script, so the right way really is to define ovs_mtu on the interfaces.
It's done like that in the wiki too: https://pve.proxmox.com/wiki/Open_vSwitch

All the other patches in ifupdown2, pve-network, pve-manager, ... were applied by the Proxmox team yesterday, so they should be available soon in the pvetest repository (no need to download them from my own server anymore).

I'd like to thank you for your time; your use case is very interesting, and we have fixed a lot of bugs together.

I'm waiting for your next request ;)
 
Hi Spirit,

Thank you for all your assistance and help. One last note: ifupdown2 seems to rewrite ovs_mtu 9000 to just mtu 9000 in /etc/network/interfaces, but it works fine now.

Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
        mtu 9000
#eth0 - 1_eth-0-38 - 1G

auto eth1
iface eth1 inet manual
        mtu 9000
#eth1 - 2_eth-0-38 - 1G
Regarding "they should be available soon in the pvetest repository": any idea when this will reach the normal and enterprise repos? My other servers use the enterprise repo as they are production servers, and I would not want to add the pvetest repo to them.

Thanks again for your help and patience.
 

Hmm, the rewrite is done by the config management from Proxmox (and the file is rewritten when you install/upgrade ifupdown2).
It works in this case because I'm applying ovs_mtu from the bond to the ethX interfaces too.
I'll look at keeping ovs_mtu on ethX in the configuration.



I can't give you an exact date, but for the no-subscription repo it should be within this month.
package versions will be:

pve-manager: 6.2-6
libpve-network-perl: 0.4-6
ifupdown2: 3.0.0-1+pve2

I also have a small fix in pve-common:
https://git.proxmox.com/?p=pve-common.git;a=commit;h=HEAD
for when you use trunks + a default VLAN != 1 on a VM NIC option.
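(For reference, the combination this fix targets corresponds to a guest NIC line like the following; the MAC and VLAN IDs are placeholders:)

Code:
# /etc/pve/qemu-server/<vmid>.conf
# tag = default/untagged VLAN (here != 1), trunks = allowed tagged VLANs
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,tag=10,trunks=20;30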

I'll also look at adding the trunks options in the GUI.


Thanks to you!

Does the current code match your needs? Can you do all your different setups?
 
Hi,
first of all: thank you very much for this fantastic job!!
I'll be starting extensive testing in my lab, particularly focused on VXLAN and Open vSwitch.

For now, let me report a small typo:

Code:
root@munich ~ # apt info libpve-network-perl
...
Description: Proxmox VE storage management library
 This package contains the storage management library used by Proxmox VE.

We are definitely not talking about storage here.

Thanks again.
Massimo.
 

Hi, thanks for the report ;)

I have done a lot of Open vSwitch fixes this week, so maybe you can wait a little bit until the new packages are available in the pvetest or no-subscription repo.
For VXLAN, it should be OK.
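(For VXLAN testing, a minimal zones.cfg sketch; the syntax is assumed to match the SDN docs of the time, the peer addresses are placeholders, and the MTU is lowered because VXLAN encapsulation adds roughly 50 bytes:)

Code:
vxlan: zvxlan
        peers 192.168.0.1,192.168.0.2,192.168.0.3
        mtu 1450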
 
Hi Spirit,

For now the current code seems to fit my needs, and the other patch from David Herselman would not be needed anymore since the SDN will replace it. Can I safely restore the original code of /usr/share/perl5/PVE/Network.pm?
https://forum.proxmox.com/threads/proxmox-5-0-and-ovs-with-dot1q-tunnel.34090/

Code:
Patched /usr/share/perl5/PVE/Network.pm:
--- /usr/share/perl5/PVE/Network.pm.orig        2020-05-08 16:54:14.734230861 +0200
+++ /usr/share/perl5/PVE/Network.pm     2020-05-08 16:55:14.739249932 +0200
@@ -249,8 +249,10 @@
     # first command
     push @$cmd, '--', 'add-port', $bridge, $iface;
     push @$cmd, "tag=$tag" if $tag;
-    push @$cmd, "trunks=". join(',', $trunks) if $trunks;
-    push @$cmd, "vlan_mode=native-untagged" if $tag && $trunks;
+    push @$cmd, "vlan_mode=dot1q-tunnel" if $tag;
+    push @$cmd, "other-config:qinq-ethtype=802.1q" if $tag;
+    push @$cmd, "cvlans=". join(',', $trunks) if $trunks && $tag;
+    push @$cmd, "trunks=". join(',', $trunks) if $trunks && !$tag;
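(A hedged note on restoring the stock file: /usr/share/perl5/PVE/Network.pm is shipped by the libpve-common-perl package, so reinstalling that package should bring back the unpatched version, e.g.:)

Code:
# confirm which package owns the file, then reinstall it to restore the stock copy
dpkg -S /usr/share/perl5/PVE/Network.pm
apt-get install --reinstall libpve-common-perl

(And for context, the patched add-port arguments above are equivalent to setting these columns on the Port record by hand; the port name here is hypothetical:)

Code:
ovs-vsctl set Port tap100i0 vlan_mode=dot1q-tunnel other-config:qinq-ethtype=802.1q tag=4040 cvlans=20,30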
 
Hi Spirit,

I have now switched my lab server to the pvetest repo. You previously mentioned that MTU would be an option on zones for both VLAN and QinQ, and also on vnets for both. Below I added a new zone (zvlan1) via the GUI, with vnet4041 attached to it; as you can see, no MTU 9000 or vlan-aware option is configured.

Code:
root@pve00:~# cat /etc/pve/sdn/zones.cfg
vlan: zvlan
        bridge vmbr0
        mtu 9000

qinq: qinq4040
        bridge vmbr0
        tag 4040
        mtu 9000

vlan: zvlan1
        bridge vmbr0

root@pve00:~# cat /etc/pve/sdn/vnets.cfg
vnet: vnet4040
        tag 4040
        zone zvlan

vnet: vnet20
        tag 20
        zone qinq4040

vnet: vnet4041
        tag 4041
        zone zvlan1

root@pve00:~# cat /etc/network/interfaces.d/sdn
#version:67

auto ln_vnet4040
iface ln_vnet4040
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=4040

auto ln_vnet4041
iface ln_vnet4041
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=4041

auto sv_qinq4040
iface sv_qinq4040
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options vlan_mode=dot1q-tunnel tag=4040 other_config:qinq-ethtype=802.1q

auto vmbr0
iface vmbr0
        ovs_type OVSBridge
        ovs_ports ln_vnet4040

auto vnet20
iface vnet20
        bridge_ports z_qinq4040.20
        bridge_stp off
        bridge_fd 0
        mtu 9000

auto vnet4040
iface vnet4040
        bridge_ports ln_vnet4040
        bridge_stp off
        bridge_fd 0
        mtu 9000

auto vnet4041
iface vnet4041
        bridge_ports ln_vnet4041
        bridge_stp off
        bridge_fd 0

auto z_qinq4040
iface z_qinq4040
        mtu 9000
        bridge-stp off
        bridge-ports sv_qinq4040
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

Below are some screenshots where you can see that the Add VLAN dialog has no MTU or vlan-aware options.
It would be nice to have MTU, vlan-aware, and bridge-vids here, so we can set a VLAN or a VLAN range, plus an option for 802.1q or 802.1ad; and if you choose vlan-aware, to also set dot1q-tunnel in OVS.

[screenshots attached: vlan_zone.JPG, qinq_zone.JPG, create_vnet.JPG]

Without the vlan-aware option and MTU on the VLAN zone, vnet4040 lost its dot1q-tunnel and 9000 MTU; see below:

Code:
ovs-vsctl list-ports switch_c | xargs -n1 ip link show  | grep mtu | column -t
ovs-vsctl: no bridge named switch_c
1:   lo:                        <LOOPBACK,UP,LOWER_UP>                     mtu  65536  qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
2:   eth0:                      <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  mq          master  ovs-system  state  UP       mode   DEFAULT  group  default  qlen  1000
3:   eth1:                      <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  mq          master  ovs-system  state  UP       mode   DEFAULT  group  default  qlen  1000
4:   ovs-system:                <BROADCAST,MULTICAST>                      mtu  1500   qdisc  noop        state   DOWN        mode   DEFAULT  group  default  qlen   1000
5:   vmbr0:                     <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
6:   vlan1:                     <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
7:   vlan18:                    <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
8:   vlan20:                    <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
9:   vlan21:                    <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
11:  vlan2:                     <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
12:  bond0:                     <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
13:  ln_vnet4040:               <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  1500   qdisc  noqueue     master  vnet4040    state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
14:  vnet4040:                  <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
15:  tap20101i0:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
16:  tap20101i1:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
17:  tap20101i2:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
18:  tap20101i3:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
19:  tap20101i4:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
20:  tap20101i5:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  vnet4040    state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
22:  z_qinq4040:                <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
23:  z_qinq4040.20@z_qinq4040:  <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     master  vnet20      state  UP       mode   DEFAULT  group  default  qlen  1000
24:  vnet20:                    <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
25:  tap20102i0:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
26:  tap20102i1:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
27:  tap20102i2:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
28:  tap20102i3:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
29:  tap20102i4:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
30:  tap20102i5:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  vnet20      state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
31:  sv_qinq4040:               <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  1500   qdisc  noqueue     master  z_qinq4040  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
32:  ln_vnet4041:               <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  1500   qdisc  noqueue     master  vnet4041    state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
33:  vnet4041:                  <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  1500   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000


root@pve00:~# ovs-vsctl get Port ln_vnet4040 vlan_mode
[]
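(For comparison, listing the whole Port record shows tag, trunks, cvlans, vlan_mode and other_config in one go:)

Code:
ovs-vsctl list Port ln_vnet4040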
 
Hi spirit,

After re-installing the versions I downloaded from your website:

Code:
dpkg -i libpve-network-perl_0.4-5_all.deb
dpkg -i ifupdown2_3.0.0-1+pve1_all.deb
systemctl restart pvedaemon
systemctl restart pveproxy
pvesh set /cluster/sdn/

the config below looks correct again, so maybe the versions in the pvetest repo are not up to date yet.

Code:
root@pve00:~# cat /etc/network/interfaces.d/sdn
#version:68


auto ln_vnet4040
iface ln_vnet4040
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_mtu 9000
        ovs_options vlan_mode=dot1q-tunnel other_config:qinq-ethtype=802.1q tag=4040

auto ln_vnet4041
iface ln_vnet4041
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_mtu 9000
        ovs_options vlan_mode=dot1q-tunnel other_config:qinq-ethtype=802.1q tag=4041

auto sv_qinq4040
iface sv_qinq4040
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_mtu 9000
        ovs_options vlan_mode=dot1q-tunnel tag=4040 other_config:qinq-ethtype=802.1q

auto vmbr0
iface vmbr0
        ovs_ports sv_qinq4040
        ovs_ports ln_vnet4040
        ovs_ports ln_vnet4041

auto vnet20
iface vnet20
        bridge_ports z_qinq4040.20
        bridge_stp off
        bridge_fd 0
        mtu 9000

auto vnet4040
iface vnet4040
        bridge_ports ln_vnet4040
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

auto vnet4041
iface vnet4041
        bridge_ports ln_vnet4041
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

auto z_qinq4040
iface z_qinq4040
        mtu 9000
        bridge-stp off
        bridge-ports sv_qinq4040
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
root@pve00:~# ovs-vsctl get Port ln_vnet4040 vlan_mode
"dot1q-tunnel"
root@pve00:~# ovs-vsctl get Port ln_vnet4041 vlan_mode
"dot1q-tunnel"
root@pve00:~# ovs-vsctl list-ports switch_c | xargs -n1 ip link show  | grep mtu | column -t
ovs-vsctl: no bridge named switch_c
1:   lo:                        <LOOPBACK,UP,LOWER_UP>                     mtu  65536  qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
2:   eth0:                      <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  mq          master  ovs-system  state  UP       mode   DEFAULT  group  default  qlen  1000
3:   eth1:                      <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  mq          master  ovs-system  state  UP       mode   DEFAULT  group  default  qlen  1000
4:   ovs-system:                <BROADCAST,MULTICAST>                      mtu  1500   qdisc  noop        state   DOWN        mode   DEFAULT  group  default  qlen   1000
5:   vmbr0:                     <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
6:   vlan1:                     <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
7:   vlan18:                    <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
8:   vlan20:                    <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
9:   vlan21:                    <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
11:  vlan2:                     <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
12:  bond0:                     <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
13:  ln_vnet4040:               <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     master  vnet4040    state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
14:  vnet4040:                  <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
15:  tap20101i0:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
16:  tap20101i1:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
17:  tap20101i2:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
18:  tap20101i3:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
19:  tap20101i4:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
20:  tap20101i5:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  vnet4040    state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
22:  z_qinq4040:                <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
23:  z_qinq4040.20@z_qinq4040:  <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     master  vnet20      state  UP       mode   DEFAULT  group  default  qlen  1000
24:  vnet20:                    <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
25:  tap20102i0:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
26:  tap20102i1:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
27:  tap20102i2:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
28:  tap20102i3:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
29:  tap20102i4:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  ovs-system  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
30:  tap20102i5:                <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  vnet20      state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
33:  vnet4041:                  <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
34:  sv_qinq4040:               <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     master  z_qinq4040  state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
35:  ln_vnet4041:               <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     master  vnet4041    state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
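(For reference, a quick sketch to compare the installed versions against what pvetest is expected to ship:)

Code:
pveversion -v | grep -Ei 'ifupdown2|libpve-network|pve-manager'
dpkg -l ifupdown2 libpve-network-perl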
 
