Proxmox 7.0 SDN beta test

First of all, thanks for all this amazing work on SDN!

I've tried to run some tests on IPAMs, but am I missing something, or is this not implemented at the VM/CT level?
I couldn't find where to tell a CT/VM to get its IP from the IPAM.

I've also checked the IPAM code, and I'll develop a plugin for EfficientIP/SOLIDserver; it should be rather simple as there is a REST API.
However, our IPAM also registers DNS records, so I would probably create a "fake" EfficientIP/SOLIDserver DNS plugin which does nothing but check that the IPAM part has already done the job.
 
First of all, thanks for all this amazing work on SDN!

I've tried to run some tests on IPAMs, but am I missing something, or is this not implemented at the VM/CT level?
I couldn't find where to tell a CT/VM to get its IP from the IPAM.

yes, currently the VM/CT part is not yet implemented. I'm hoping to finish it soon.
(I have a working LXC patch; QEMU is a little bit more complex.)

if you want to test: the subnets should already be created in the IPAM, and if you define a simple zone, the gateway of a subnet will be registered in the IPAM.


I've also checked the IPAM code, and I'll develop a plugin for EfficientIP/SOLIDserver; it should be rather simple as there is a REST API.
I need to add support for custom plugins too, like for storage. (But if you want to test, you can replace the code of an existing plugin.)

However, our IPAM also registers DNS records, so I would probably create a "fake" EfficientIP/SOLIDserver DNS plugin which does nothing but check that the IPAM part has already done the job.
The DNS plugin is optional; you can use the IPAM plugin alone.
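For reference, IPAM backends are declared in /etc/pve/sdn/ipams.cfg. A sketch of an entry for the built-in phpIPAM plugin, with placeholder URL, token, and section values, would look something like this:

Code:
phpipam: myipam
        url https://phpipam.example.com/api/myapp
        token xxxxxxxxxxxx
        section 1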
 
I mean, before I write a patch, can you try changing this directly in /etc/network/interfaces.d/sdn:

Code:
auto brol4057
iface brol4057
        bridge_ports z_brol4057
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000
(changing bridge_ports)

then run ifreload -a, and check that it's OK.
Hi Spirit,

I have tried to add this, but it seems to be failing.


Code:
cat /etc/network/interfaces.d/sdn
#version:3

auto brol1302
iface brol1302
        bridge_ports z_brol4057.1302
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

auto brol2071
iface brol2071
        bridge_ports z_brol4057.2071
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

auto broll13
iface broll13
        bridge_ports z_brol4057.13
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

auto vmbr0
iface vmbr0
        bridge-vlan-protocol 802.1q

auto z_brol4057
iface z_brol4057
        mtu 9000
        bridge-stp off
        bridge-ports vmbr0.4057
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto brol4057
iface brol4057
        bridge_ports z_brol4057
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

It gives the error below:


Code:
ifreload -a
warning: bond0: attribute bond-min-links is set to '0'
warning: brol4057: apply bridge ports settings: cmd '/bin/ip -force -batch - [link set dev z_brol4057 master brol4057]' failed: returned 1 (Error: Can not enslave a bridge to a bridge.
Command failed -:1
)
 
@pieteras.meyer

can you try a simple vnet on a QinQ zone, with tag=1?

I have done a test, and I think it should work. (tag=1 will be removed, and you should see only s-tag 4057 going out to vmbr0.)
Hi Spirit,

The interfaces are now added, but traffic is not flowing on the QinQ VLANs configured on this interface in the VM.

Code:
ip link show  | grep mtu | column -t
1:   lo:                          <LOOPBACK,UP,LOWER_UP>                    mtu  65536  qdisc  noqueue  state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
2:   eth0:                        <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP>   mtu  9000   qdisc  mq       master  bond0       state  UP       mode   DEFAULT  group  default  qlen  1000
3:   eth1:                        <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP>   mtu  9000   qdisc  mq       master  bond0       state  UP       mode   DEFAULT  group  default  qlen  1000
4:   bond0:                       <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP>  mtu  9000   qdisc  noqueue  master  vmbr0       state  UP       mode   DEFAULT  group  default  qlen  1000
5:   vlan4057@bond0:              <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  master  vmbr4057    state  UP       mode   DEFAULT  group  default  qlen  1000
6:   vmbr4057:                    <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  state   UP          mode   DEFAULT  group  default  qlen   1000
7:   vlan16@bond0:                <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  state   UP          mode   DEFAULT  group  default  qlen   1000
8:   vlan17@bond0:                <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  state   UP          mode   DEFAULT  group  default  qlen   1000
9:   vlan20@bond0:                <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  state   UP          mode   DEFAULT  group  default  qlen   1000
10:  vmbr0:                       <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  state   UP          mode   DEFAULT  group  default  qlen   1000
19:  vmbr0.4057@vmbr0:            <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  master  z_brol4057  state  UP       mode   DEFAULT  group  default  qlen  1000
20:  z_brol4057:                  <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  state   UP          mode   DEFAULT  group  default  qlen   1000
21:  z_brol4057.1302@z_brol4057:  <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  master  brol1302    state  UP       mode   DEFAULT  group  default  qlen  1000
22:  brol1302:                    <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  state   UP          mode   DEFAULT  group  default  qlen   1000
23:  z_brol4057.2071@z_brol4057:  <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  master  brol2071    state  UP       mode   DEFAULT  group  default  qlen  1000
24:  brol2071:                    <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  state   UP          mode   DEFAULT  group  default  qlen   1000
25:  z_brol4057.13@z_brol4057:    <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  master  broll13     state  UP       mode   DEFAULT  group  default  qlen  1000
26:  broll13:                     <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  state   UP          mode   DEFAULT  group  default  qlen   1000
28:  z_brol4057.1@z_brol4057:     <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  master  brol4057    state  UP       mode   DEFAULT  group  default  qlen  1000
29:  brol4057:                    <BROADCAST,MULTICAST,UP,LOWER_UP>         mtu  9000   qdisc  noqueue  state   UP          mode   DEFAULT  group  default  qlen   1000

Code:
root@pve00:~# brctl show
bridge name     bridge id               STP enabled     interfaces
brol1302                8000.0cc47a4dd528       no              z_brol4057.1302
brol2071                8000.0cc47a4dd528       no              z_brol4057.2071
brol4057                8000.0cc47a4dd528       no              tap21001i2
                                                        z_brol4057.1
broll13         8000.0cc47a4dd528       no              z_brol4057.13
vmbr0           8000.0cc47a4dd528       no              bond0
                                                        tap21001i0
                                                        tap21001i1
z_brol4057              8000.0cc47a4dd528       no              vmbr0.4057

Code:
root@pve00:~# cat /etc/network/interfaces.d/sdn
#version:4

auto brol1302
iface brol1302
        bridge_ports z_brol4057.1302
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

auto brol2071
iface brol2071
        bridge_ports z_brol4057.2071
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

auto brol4057
iface brol4057
        bridge_ports z_brol4057.1
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

auto broll13
iface broll13
        bridge_ports z_brol4057.13
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

auto vmbr0
iface vmbr0
        bridge-vlan-protocol 802.1q

auto z_brol4057
iface z_brol4057
        mtu 9000
        bridge-stp off
        bridge-ports vmbr0.4057
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

Code:
root@pve00:~# qm config 21001
agent: 1
bootdisk: virtio0
cores: 2
cpu: host
memory: 1024
name: chr-v7-beta
net0: virtio=52:54:00:89:14:19,bridge=vmbr0,queues=2
net1: virtio=52:54:00:A6:10:64,bridge=vmbr0,link_down=1,queues=2
net2: virtio=52:54:00:25:ED:50,bridge=brol4057
numa: 0
onboot: 0
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=2a9ad897-8d96-41df-8655-99061d3f715a
sockets: 1
tablet: 0
virtio0: pve-ceph-pool-1:vm-21001-disk-0,cache=none,size=128M
vmgenid: 2813354f-61ca-4e4e-af89-c890d085f848
 
@pieteras.meyer

can you try a simple vnet on a QinQ zone, with tag=1?

I have done a test, and I think it should work. (tag=1 will be removed, and you should see only s-tag 4057 going out to vmbr0.)
Hi Spirit,

I have also tried the below on the VM, but no luck; still no traffic on any QinQ VLANs, only on the untagged VLAN 1.

Code:
root@pve00:~# qm set 21001 --net2 virtio=52:54:00:25:ED:50,bridge=brol4057,trunks='2-4094'
update VM 21001: -net2 virtio=52:54:00:25:ED:50,bridge=brol4057,trunks=2-4094

root@pve00:~# qm config 21001
agent: 1
bootdisk: virtio0
cores: 2
cpu: host
memory: 1024
name: chr-v7-beta
net0: virtio=52:54:00:89:14:19,bridge=vmbr0,queues=2
net1: virtio=52:54:00:A6:10:64,bridge=vmbr0,link_down=1,queues=2
net2: virtio=52:54:00:25:ED:50,bridge=brol4057,trunks=2-4094
numa: 0
onboot: 0
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=2a9ad897-8d96-41df-8655-99061d3f715a
sockets: 1
tablet: 0
virtio0: pve-ceph-pool-1:vm-21001-disk-0,cache=none,size=128M
vmgenid: 2813354f-61ca-4e4e-af89-c890d085f848
 
@pieteras.meyer
just to be sure I understand your need:

do you want to be able to see, inside this VM 21001 (which could be at the zone-level s-tag), all the different c-tags coming from the other VM vnets? (Are you doing some kind of shared gateway/router VM?)


can you try this ?

Code:
auto pr_brol4057
iface pr_brol4057
        link-type veth
        veth-peer-name ln_brol4057

auto ln_brol4057
iface ln_brol4057
        link-type veth
        veth-peer-name pr_brol4057

auto brol4057
iface brol4057
        bridge_ports pr_brol4057
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000
        
        
auto z_brol4057
iface z_brol4057
        mtu 9000
        bridge-stp off
        bridge-ports vmbr0.4057 ln_brol4057
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094


I have done this, and with it, a VM in brol4057 can see the traffic of the other vnets in this QinQ zone with the original c-tag.
 
@pieteras.meyer
just to be sure I understand your need:

do you want to be able to see, inside this VM 21001 (which could be at the zone-level s-tag), all the different c-tags coming from the other VM vnets? (Are you doing some kind of shared gateway/router VM?)
Hi Spirit,

I'm simulating a customer setup. As explained before, VLAN 4057 is an s-tag on our network used to encapsulate all of this customer's traffic inside s-tag 4057.

The customer has servers, some Linux and some Windows, with the access ports listed below, for which we have configured these vnets:

broll13 (customer VLAN 13): our side QinQ 4057.13
brol1302 (customer VLAN 1302): our side QinQ 4057.1302
brol2071 (customer VLAN 2071): our side QinQ 4057.2071

Just FYI, the above vnets do allow QinQ capabilities for the customer; one example below:

brol2071 (customer VLAN 2071.x): our side QinQ 4057.2071.(2-4094)

Then the customer requires a trunk port for his routers and firewalls, for which we need the vnet brol4057, which will allow the customer to add any VLAN. We provide mostly L2 network traffic, and on the switches outside of Proxmox the customer also has external connections that we bring inside s-tag 4057, hence the requirement for a trunk port:

brol4057 (customer trunk port, VLANs 2-4094): our side QinQ 4057.(2-4094)

The above setup does work with OVS using the vlan_mode=dot1q-tunnel parameter on the ports, but as explained, due to some performance issues I have noticed, I'm trying to move away from OVS.

Hope this makes more sense.
 
ok, can you try the /etc/network/interfaces.d/sdn config I posted just before?
 
ok, can you try the /etc/network/interfaces.d/sdn config I posted just before?
Hi Spirit,

I have applied the requested change; below is my new config, and it does seem to work.

Code:
#version:4

auto brol1302
iface brol1302
        bridge_ports z_brol4057.1302
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

auto brol2071
iface brol2071
        bridge_ports z_brol4057.2071
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

auto broll13
iface broll13
        bridge_ports z_brol4057.13
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

auto vmbr0
iface vmbr0
        bridge-vlan-protocol 802.1q

auto pr_brol4057
iface pr_brol4057
        link-type veth
        veth-peer-name ln_brol4057

auto ln_brol4057
iface ln_brol4057
        link-type veth
        veth-peer-name pr_brol4057

auto brol4057
iface brol4057
        bridge_ports pr_brol4057
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000


auto z_brol4057
iface z_brol4057
        mtu 9000
        bridge-stp off
        bridge-ports vmbr0.4057 ln_brol4057
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

Code:
root@pve00:~# brctl show
bridge name     bridge id               STP enabled     interfaces
brol1302                8000.0cc47a4dd528       no              z_brol4057.1302
brol2071                8000.0cc47a4dd528       no              tap21001i2
                                                        z_brol4057.2071
brol4057                8000.46a9b9b45647       no              pr_brol4057
broll13         8000.0cc47a4dd528       no              z_brol4057.13
vmbr0           8000.0cc47a4dd528       no              bond0
                                                        tap21001i0
                                                        tap21001i1
z_brol4057              8000.0cc47a4dd528       no              ln_brol4057
                                                        vmbr0.4057
                                                        
root@pve00:~# ip link show  | grep mtu | column -t
1:   lo:                          <LOOPBACK,UP,LOWER_UP>                     mtu  65536  qdisc  noqueue     state   UNKNOWN     mode   DEFAULT  group  default  qlen   1000
2:   eth0:                        <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP>    mtu  9000   qdisc  mq          master  bond0       state  UP       mode   DEFAULT  group  default  qlen  1000
3:   eth1:                        <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP>    mtu  9000   qdisc  mq          master  bond0       state  UP       mode   DEFAULT  group  default  qlen  1000
4:   bond0:                       <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP>   mtu  9000   qdisc  noqueue     master  vmbr0       state  UP       mode   DEFAULT  group  default  qlen  1000
7:   vlan16@bond0:                <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
8:   vlan17@bond0:                <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
9:   vlan20@bond0:                <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
10:  vmbr0:                       <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
19:  vmbr0.4057@vmbr0:            <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     master  z_brol4057  state  UP       mode   DEFAULT  group  default  qlen  1000
20:  z_brol4057:                  <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
21:  z_brol4057.1302@z_brol4057:  <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     master  brol1302    state  UP       mode   DEFAULT  group  default  qlen  1000
22:  brol1302:                    <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
23:  z_brol4057.2071@z_brol4057:  <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     master  brol2071    state  UP       mode   DEFAULT  group  default  qlen  1000
24:  brol2071:                    <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
25:  z_brol4057.13@z_brol4057:    <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     master  broll13     state  UP       mode   DEFAULT  group  default  qlen  1000
26:  broll13:                     <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
29:  brol4057:                    <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  9000   qdisc  noqueue     state   UP          mode   DEFAULT  group  default  qlen   1000
30:  tap21001i0:                  <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  mq          master  vmbr0       state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
31:  tap21001i1:                  <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  mq          master  vmbr0       state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
32:  tap21001i2:                  <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>  mtu  9000   qdisc  pfifo_fast  master  brol2071    state  UNKNOWN  mode   DEFAULT  group  default  qlen  1000
33:  pr_brol4057@ln_brol4057:     <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  1500   qdisc  noqueue     master  brol4057    state  UP       mode   DEFAULT  group  default  qlen  1000
34:  ln_brol4057@pr_brol4057:     <BROADCAST,MULTICAST,UP,LOWER_UP>          mtu  1500   qdisc  noqueue     master  z_brol4057  state  UP       mode   DEFAULT  group  default  qlen  1000
 
@pieteras.meyer
can you test: http://mutulin1.odiso.net/libpve-network-perl_0.5-2_all.deb?

Code:
wget http://mutulin1.odiso.net/libpve-network-perl_0.5-2_all.deb
dpkg -i libpve-network-perl_0.5-2_all.deb
systemctl restart pvedaemon

then create a vnet without a tag in the QinQ zone; it should see the other tagged vnets in this zone.

if possible, can you try both with a VLAN-aware bridge vmbr0 (your current setup) and with an OVS vmbr0? (I have also done some cleanup in the OVS code, as we had multiple ovs_ports entries in the conf ...)
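For reference, such an untagged vnet would be declared in /etc/pve/sdn/vnets.cfg roughly as below; the vnet and zone identifiers (brolnotag, myqinqzone) are placeholders, and the point is simply that no tag line is present:

Code:
vnet: brolnotag
        zone myqinqzone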
 
Hi Spirit,

I have tested both with a VLAN-aware bridge and with OVS bridges, and everything seems to work as expected.

ok, thanks! I'll send the patch to the Proxmox devs; it should be released in the coming days.
Thank you for your efforts, much appreciated.
Thanks to you too! It's great to have real use cases to improve the features.
 
  • Like
Reactions: pieteras.meyer
@Matthieu Le Corre

I have sent a patch to the mailing list to support custom IPAM plugins:
https://lists.proxmox.com/pipermail/pve-devel/2021-April/047999.html
I have made a deb for testing:
http://mutulin1.odiso.net/libpve-network-perl_0.5-2_all.deb

you need to add your plugin in /usr/share/perl5/PVE/Network/SDN/Ipams/Custom/yourplugin.pm

here is a structure example (it is the same as the other plugins, with an extra "sub api"):

Code:
package PVE::Network::SDN::Ipams::Custom::MyCustomIpamPlugin;

use strict;
use warnings;
use PVE::INotify;
use PVE::Cluster;
use PVE::Tools;

use base('PVE::Network::SDN::Ipams::Plugin');

sub type {
    return 'mycustomipam';
}


sub api {
    return 1;
}

sub options {

    return {
        url => { optional => 0},
        token => { optional => 0 },
    };
}

# Plugin implementation

sub add_subnet {
    my ($class, $plugin_config, $subnetid, $subnet, $noerr) = @_;
}

sub del_subnet {
    my ($class, $plugin_config, $subnetid, $subnet, $noerr) = @_;

}

sub add_ip {
    my ($class, $plugin_config, $subnetid, $subnet, $ip, $hostname, $mac, $description, $is_gateway, $noerr) = @_;

}

sub update_ip {
    my ($class, $plugin_config, $subnetid, $subnet, $ip, $hostname, $mac, $description, $is_gateway, $noerr) = @_;
}

sub add_next_freeip {
    my ($class, $plugin_config, $subnetid, $subnet, $hostname, $mac, $description, $noerr) = @_;

}

sub del_ip {
    my ($class, $plugin_config, $subnetid, $subnet, $ip, $noerr) = @_;

}

sub verify_api {
    my ($class, $plugin_config) = @_;

}

sub on_update_hook {
    my ($class, $plugin_config) = @_;

}

1;
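Presumably, once the plugin file is in place, it would then be declared in /etc/pve/sdn/ipams.cfg under its type, with the options defined above; the identifier and values here are only placeholders:

Code:
mycustomipam: mycustom1
        url https://ipam.example.com
        token xxxxxxxxxxxx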
 
[Screenshot attached: Capture d’écran 2021-05-04 à 13.17.07.png]

Hi,
small cosmetic glitch! Both packages have the same names...
For what it's worth (not much, really, compared to the usefulness of this SDN stuff!!!)

Merci
Etienne
 
Hi,
small cosmetic glitch! Both packages have the same names...
For what it's worth (not much, really, compared to the usefulness of this SDN stuff!!!)

Merci
Etienne
oh, good catch, thanks.
It's only the package description, so no problem. I'll fix that for the next release.
 
Hi,
small cosmetic glitch! Both packages have the same names...
For what it's worth (not much, really, compared to the usefulness of this SDN stuff!!!)

Merci
Etienne
So, you found out what the packaging of the SDN is based on ;-)

It's only the package description, so no problem. I'll fix that for the next release.
FYI, just done that directly: https://git.proxmox.com/?p=pve-network.git;a=commitdiff;h=90c150b25bfe881bff977adfee0e3c96a41ba675
I'll review/apply the pending patches from you now.
 
Hi all,

I've just started playing around with the SDN software, and reading this thread I'm not sure if this is a bug or operator error; however, I ran into the following.

I have two nodes in a cluster:
hv01
hv02

I tried setting up a VXLAN using the same controller; this all worked, and I saw VNET1 appear on all nodes.
I subsequently tried setting up BGP EVPN, which I could push with no issues, and I saw VNET02 appear on all nodes. However, when I then tried to bind the new vnet to a container, I got an error:

Code:
run_buffer: 314 Script exited with status 2
lxc_create_network_priv: 3068 No such device - Failed to create network device
lxc_spawn: 1786 Failed to create the network
__lxc_start: 1999 Failed to spawn container "147"
TASK ERROR: startup for container '147' failed

So I figured that, as this is a beta, this might be a bug, and I reverted the configuration back to VXLAN only, using VNET1, but the above error remains. I can now only use the default bridges on both my nodes.

I have removed both openvswitch and the perl script, however the error remains. Is this a known issue, or is this just me being daft somehow?

Happy to help push this functionality further, as it is indeed pretty brilliant.

Thanks
 
