Proxmox 7.0 SDN beta test

It does the trick, thank you spirit.
And yes, this is quite a bit confusing. Is it used for multi-tenancy?
Well, some users have asked about it for some specific setups.


It could be used to add a VLAN tag over VXLAN, for example.
I also have some users needing a triple tag ^_^ (a QinQ zone with a double tag, plus another tag at the VM level).
Or some users want to do QinQ with a VLAN zone at the Proxmox level, and the users manage the second VLAN inside the guest.
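For illustration only (bridge name, vnet name and tags are invented, not the exact stanza SDN generates): with ifupdown2, a double tag can be expressed by stacking VLAN subinterfaces, which is roughly what a QinQ zone does behind the scenes:

```
# hypothetical vnet in a QinQ zone: outer (service) tag 100, inner tag 200
auto myvnet
iface myvnet
        bridge-ports vmbr0.100.200
        bridge-stp off
        bridge-fd 0
```

A third tag can then still be set on the VM NIC itself, giving the "triple tag" case above.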

https://pve.proxmox.com/pve-docs/chapter-pvesdn.html#pvesdn_config_vnet
"VLAN Aware
Allow to add an extra VLAN tag in the virtual machine or container vNIC configurations or allow the guest OS to manage the VLAN’s tag."

I'll try to make the doc more explicit too.
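To make the guest-managed case from the quoted docs concrete (interface name and addresses are examples, not from this thread): with a VLAN-aware vnet, a Debian-style guest can tag its own traffic, e.g. in the guest's /etc/network/interfaces:

```
# inside the guest: VLAN 20 is handled by the guest OS itself
auto eth0.20
iface eth0.20 inet static
        address 192.168.20.10/24
```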
 
Hello.

I have a question about provisioning cloud-init images in an SDN Proxmox cluster.
I thought that when I clone a VM with the cloud-init process enabled, the VM would inherit the gateway given in the vnet/subnet of the SDN module.
I didn't see this in my test. So what is the purpose of the gateway field in the SDN module?

Thank you for your answer.
 
Currently, the IPAM module doesn't yet allocate IP addresses for VMs (cloud-init) or CTs (directly in the CT config).
(When it's done, an IP will be taken from IPAM in the defined subnet, along with the gateway.)
I'm still working on it.

Currently, the gateway is only used by routed zones ("simple" && "bgp-evpn"), where the gateway is the IP on the vnet directly.
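To illustrate the routed case (names and addresses are invented): for a "simple" zone, the subnet's gateway becomes the address of the vnet bridge itself, so the generated stanza in /etc/network/interfaces.d/sdn looks roughly like:

```
auto vnet1
iface vnet1
        address 10.0.0.1/24   # the SDN gateway field: Proxmox itself routes this subnet
        bridge-ports none
        bridge-stp off
        bridge-fd 0
```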
 
Hello,
I'm facing a new problem with the SDN module.
The vwan interface in the VWAN zone is in error on one node only (out of a 3-node cluster): "error iface vwan"

How can I check/correct this?

Is there a troubleshooting guide for SDN? Which log files should I check?


Ty
 

Attachments

  • 20211124-essos.tar
The config seems to be correctly generated.
Can you send me the result of "ifreload -a -d" on this node?


(BTW, you can use one zone with multiple vnets if you want; no need to define 1 zone = 1 vnet.)
 
The ifreload result is attached.

I know that I don't need to define one zone per vnet.
The zones I defined are logical zones: WAN, front, back. I will create more vnets in each zone when I need them. Maybe my understanding of what a zone is isn't correct.
Is multi-tenancy/rights assignment the only purpose of a zone?
 

Attachments

  • 20211124-ifreload.log

The reload seems to be fine.
Can you also send the result of "ifquery -a -c"?

The verification is done by this command.
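Since ifquery -a -c tags every attribute with [pass]/[fail], a tiny script can pull out just the failing lines when the output gets long (a sketch; the sample output below is invented and abbreviated, not a real node's config):

```python
import re

def failing_lines(ifquery_output: str):
    """Return (iface, line) pairs that ifquery -a -c marked [fail]."""
    fails, current = [], None
    for line in ifquery_output.splitlines():
        m = re.match(r"iface\s+(\S+)", line)
        if m:
            current = m.group(1)  # remember which iface stanza we are in
        if "[fail]" in line:
            fails.append((current, line.replace("[fail]", "").strip()))
    return fails

# abbreviated sample, not real output
sample = """\
auto vwan
iface vwan                                [fail]
        bridge-ports vmbr10.10            [pass]
        address 2001:db8::1/64            [fail]
"""
print(failing_lines(sample))  # [('vwan', 'iface vwan'), ('vwan', 'address 2001:db8::1/64')]
```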

About zones: currently, they are mainly for:
- permission assignments (not yet 100% finished; the filtering of vmbrX in the VM NIC GUI is still missing)
- defining a zone on only specific nodes


(I don't know your exact use case, but if all your zones use the same vmbrX, are assigned to all nodes, and you don't need specific permissions, you can use one zone.)
 
I think I will go with one zone like you said.


This is the result of ifquery -a -c:
Code:
auto lo
iface lo inet loopback

auto enp7s0f0
iface enp7s0f0 inet static                                  [pass]
        address 10.50.0.50/24                               [pass]

auto eno2
iface eno2 inet manual

auto eno3
iface eno3 inet manual

auto eno4
iface eno4 inet manual

auto bond10
iface bond10 inet manual                                    [pass]
        bond-slaves eno2 eno3 eno4                          [pass]
        bond-miimon 100                                     [pass]
        bond-mode 802.3ad                                   [pass]

auto vmbr0
iface vmbr0 inet static                                     [pass]
        bridge-ports eno1                                   [pass]
        bridge-fd 0                                         [pass]
        bridge-stp no                                       [pass]
        address 10.20.0.50/24                               [pass]

auto vmbr10
iface vmbr10 inet manual                                    [pass]
        bridge-ports bond10                                 [pass]
        bridge-stp no                                       [pass]
        bridge-fd 0                                         [pass]
        bridge-vlan-aware yes                               [pass]
        bridge-vids 2-4094                                  []

auto vlan51
iface vlan51 inet static                                    [pass]
        vlan-raw-device enp7s0f0                            [pass]
        vlan-id 51                                          [pass]
        address 10.51.0.50/24                               [pass]

auto admin
iface admin                                                 [pass]
        bridge-ports vmbr10.2100                            [pass]
        bridge-fd 0                                         [pass]
        bridge-stp no                                       [pass]

auto bo1
iface bo1                                                   [pass]
        bridge-ports vmbr10.2011                            [pass]
        bridge-fd 0                                         [pass]
        bridge-stp no                                       [pass]

auto fo1
iface fo1                                                   [pass]
        bridge-ports vmbr10.2001                            [pass]
        bridge-fd 0                                         [pass]
        bridge-stp no                                       [pass]

auto k8s
iface k8s                                                   [pass]
        bridge-ports vmbr10.2500                            [pass]
        bridge-fd 0                                         [pass]
        bridge-stp no                                       [pass]

auto vwan
iface vwan                                                  [fail]
        bridge-ports vmbr10.10                              [pass]
        bridge-fd 0                                         [pass]
        bridge-stp no                                       [pass]
        address 2a01:e34:ee55:b9f1:e4d7:feff:fe01:5de8/64   [fail]

thank you
 
address 2a01:e34:ee55:b9f1:e4d7:feff:fe01:5de8/64 [fail]

The error is because of this IPv6 address.

The ifreload output shows:
info: vwan: netlink: ip addr del 2a01:e34:ee55:b9f1:e4d7:feff:fe01:5de8/64 dev vwan

I don't know where it's coming from. Maybe you have autoconf && accept_ra enabled for IPv6 on this WAN network?
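If SLAAC does turn out to be the cause, it can be disabled per interface via sysctl; a sketch (the file name is arbitrary, "vwan" is the interface from this thread):

```
# /etc/sysctl.d/80-vwan-no-slaac.conf
net.ipv6.conf.vwan.accept_ra = 0
net.ipv6.conf.vwan.autoconf = 0
```

Settings take effect after `sysctl --system` or a reboot.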
 
Thank you, your advice did the trick.
I disabled autoconf on this specific iface.
 
Hi, I just wanted to check: with this SDN plugin, can we make a virtual switch with multiple gateways, just like ESXi has the option of creating multiple gateways with different subnets at the VMkernel level?
Hi,
I don't know too much about how VMware works, but do you mean running the gateway IPs directly on Proxmox (so a routed setup)?
Or defining subnets where the gateway is outside Proxmox (on an external box/router/firewall ...), i.e. a bridged setup?
 
Actually I have 2 physical NICs. I wanted to assign one NIC with a different gateway for users, and the other NIC for Proxmox management purposes only. This is possible in VMware by creating any number of virtual switches.
 
You don't need to define subnets/gateways on a Proxmox bridge to get them working (with or without the SDN feature). It's a simple layer-2 switch; it doesn't care about the IP addresses going through it.

Currently, on SDN, it's possible to add subnets/gateways, but they are only used for routed setups, to define the gateway on Proxmox itself. And later, they'll be used to auto-assign IP addresses to VMs/CTs on these subnets.
 
Hi, is it possible to define 2 virtual switches with 2 gateways with SDN? Or is there any workaround in Proxmox for this?
You don't need to define gateways on the Proxmox side to get them working inside the VMs (you just need to define them inside your VM OS).

But yes, SDN allows you to define 2 vnets with different subnets/gateways.
Currently they are only used for layer-3 routed zones (simple/evpn), where the gateway is Proxmox itself.
For VLAN or layer-2 zones, the gateway defined in SDN does nothing.
(The roadmap is to implement IPAM, to autoconfigure the IP/gateway inside the VM/CT OS.)
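In other words, for a layer-2/VLAN zone the gateway lives in the guest OS. For example, in a Debian guest (interface name and addresses are illustrative):

```
auto ens18
iface ens18 inet static
        address 192.168.10.20/24
        gateway 192.168.10.1   # the upstream router/firewall, not Proxmox
```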
 
Hi, in my setup I have two nodes in a cluster (and a Qdevice witness), and from what I understand, I need to have spanning tree protocol enabled on the bridge to be able to use a redundant managed switch setup.

So I have a vmbr0 configured with openvswitch and spanning tree protocol enabled
Code:
auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports eno1 eno2 lan
        ovs_options stp_enable=true

and then I just attach the VM network interfaces to this bridge and use the VLAN tag to put them on different network segments.

Then the proxmox nodes are attached to two managed switches in parallel, one port goes to each switch (both switches have the same configuration)

So far so good. It's been fine for many months in this way.

If I try to use the beta SDN module to set up interfaces as VLANs, I get intermittent network outages (after a while I can't ping a VM in the cluster anymore, then after a bit more time it becomes available again; this happens for all VMs, not just one).

this is for example what is created in /etc/network/interfaces.d/sdn if I add a new VLAN-based VNET called "albylan", to the same vmbr0
Code:
auto albylan
iface albylan
bridge_ports ln_albylan
bridge_stp off
bridge_fd 0

auto ln_albylan
iface ln_albylan
ovs_type OVSIntPort
ovs_bridge vmbr0
ovs_options tag=10

auto vmbr0
iface vmbr0
ovs_ports ln_albylan

Now, I see that you are turning off spanning tree protocol in the bridge ports in the config generated with this plugin. I'm not an expert here, but can that be the cause of the issues I'm getting? Would enabling spanning tree protocol on all these vnet bridges help in my case?

Can I just edit the file manually to have bridge_stp on and then do a
Code:
systemctl restart networking

Or is there more to change?

Or am I completely off track and my issue is not related to spanning tree protocol?
 
Mmm, this is strange, because SDN creates a new Linux bridge (albylan) plugged into your OVS bridge (vmbr0), and it should not change the STP configuration on your vmbr0. (The vmbr0 configuration in /etc/network/interfaces.d/sdn is merged with the main /etc/network/interfaces.)

You can try editing /etc/network/interfaces.d/sdn and reloading the conf with "ifreload -a", but it will be overridden on the next SDN change apply.
So if you find a good working config, just tell me, and I'll try to add an option in SDN (or inherit the STP conf from the main bridge).
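For the experiment, the temporary edit could look like this (a sketch based on the generated config quoted above; remember it is overwritten on the next SDN apply):

```
# /etc/network/interfaces.d/sdn -- manual test edit
auto albylan
iface albylan
        bridge_ports ln_albylan
        bridge_stp on    # changed from "off" for testing
        bridge_fd 0
```

Then reload with "ifreload -a" and watch whether the intermittent outages stop.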
 
