Proxmox 7.0 SDN beta test

@mcr1974

Hi,

1 - I have private VLAN interfaces set up on the Hetzner vSwitch. My assumption is that I cannot use this to create the local Proxmox host bridge (see https://forum.proxmox.com/threads/qinq-on-hetzner-vswitch.62071/: "they confirmed to me that neither QinQ nor VXLAN is possible on top of the Hetzner vSwitches"). Is this Hetzner VLAN setup interfering with the Proxmox SDN even if I'm not using it for Proxmox?

I don't know Hetzner very well, but it's quite possible that they already use VXLAN for their vSwitch/VLAN product (looking at the 1400 MTU, it seems they have at least the 50-byte overhead of VXLAN), but I'm really not sure.
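
(For the arithmetic: a VXLAN frame adds roughly 14 bytes of outer Ethernet + 20 bytes IPv4 + 8 bytes UDP + 8 bytes VXLAN header = 50 bytes, so behind a 1500-byte uplink the usable inner MTU drops to about 1450, and behind a 1400-byte link it would drop to about 1350.)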

But I think you could create VXLAN on top of the public IP addresses; it shouldn't be a problem.

2 - Should I use the public or the private Hetzner VLAN interfaces to create the Proxmox cluster? My expectation is that both should work, but since they use the same physical NIC, I might as well use the physical public interface.
Not related to SDN, but I'd personally use the private network (for security). In Proxmox 5.4, with corosync multicast, it would have been mandatory; with Proxmox 6, you can create the cluster on public IPs.
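
If you go the private route, a minimal sketch of the cluster creation, assuming private addresses like 192.168.100.x on each host (the cluster name and IPs are placeholders):

Code:
# on the first node: create the cluster and bind corosync to the private address
pvecm create mycluster --link0 192.168.100.10
# on each additional node: join via the first node, passing the node's own private address
pvecm add 192.168.100.10 --link0 192.168.100.11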

3 - On the documentation, I read:
Code:
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.1/24
    gateway 192.168.0.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 1500

source /etc/network/interfaces.d/*
Where is the eno1 identifier coming from? Is that the name of my physical public interface?

Oh, that's just the default setup when you install Proxmox from the ISO on your own server.
The Hetzner setup is a little bit different.

With VXLAN, you don't need a vmbr0 anyway.
With the SDN, when you create the VXLAN zone, you just need to specify the addresses of the different VXLAN endpoints (your Proxmox hosts' public IPs, for example), and the traffic will simply be routed through your enp0s31f6 interface.
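
As a rough sketch, the resulting /etc/pve/sdn/zones.cfg entry for such a zone could look like this (the zone name and public IPs are placeholders; note the comma-separated peer list):

Code:
vxlan: pubzone
    peers 203.0.113.10,203.0.113.11,203.0.113.12
    mtu 1450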


4 - I like to edit files rather than messing around with the GUI. After I modify /etc/network/interfaces, what is the command to run on the host to reload the config? I have found a multitude of ways of achieving that and I really don't want to reboot the host every time.
For SDN, you really need to use the API or the GUI.
This will create the config files /etc/pve/sdn/zones.cfg and /etc/pve/sdn/vnets.cfg
(you could also create them manually if you want).

Then, when you reload the SDN config (through the GUI, the API, or pvesh set /cluster/sdn), an /etc/network/interfaces.d/sdn file is generated locally on each host and reloaded with "ifreload -a" (that's why you need ifupdown2).
That works fine without any reboot.
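
So the whole apply cycle, as a sketch:

Code:
# apply/reload the SDN configuration cluster-wide (what the GUI apply button does)
pvesh set /cluster/sdn
# what each host then runs locally after regenerating /etc/network/interfaces.d/sdn
ifreload -a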




If VXLAN is possible on top of Hetzner public IPs, I think it should be very easy to set up the SDN.
 
Hello

I'm trying to test this new feature, but I can't seem to configure it properly.
I have 3 test nodes in a cluster, and after I create a VXLAN zone and a vnet, when I apply the changes I get "error" on all 3 nodes and an empty /etc/network/interfaces.d/sdn file.

Here is some info:

root@node1:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve2
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-network-perl: 0.4-6
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-7
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

root@node1:~# cat /etc/pve/sdn/zones.cfg
vxlan: zone1
    peers 192.168.100.10 192.168.100.11 192.168.100.12
    mtu 1450

root@node1:~# cat /etc/pve/sdn/vnets.cfg
vnet: vnet1
    tag 100
    zone zone1
    vlanaware 0

root@node1:~# cat /etc/network/interfaces.d/sdn
#version:36
root@node1:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp1s0 inet manual

iface enp9s0 inet static
    address 192.168.122.10/24
    gateway 192.168.122.1

auto vmbr26
iface vmbr26 inet static
    address 192.168.100.10/24
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    mtu 1500

source /etc/network/interfaces.d/*
root@node1:~# ifquery -c -a
auto lo
iface lo inet loopback

auto vmbr26
iface vmbr26 inet static [pass]
bridge-ports enp1s0 [pass]
bridge-fd 0 [pass]
bridge-stp no [pass]
mtu 1500 [pass]
address 192.168.100.10/24 [pass]

root@node1:~# pvesh set /cluster/sdn/
Use of uninitialized value $upid in pattern match (m//) at /usr/share/perl5/PVE/Tools.pm line 1068.
Use of uninitialized value $upid in concatenation (.) or string at /usr/share/perl5/PVE/Tools.pm line 1082.
unable to parse worker upid ''
Use of uninitialized value $upid in pattern match (m//) at /usr/share/perl5/PVE/Tools.pm line 1068.
Use of uninitialized value $upid in concatenation (.) or string at /usr/share/perl5/PVE/Tools.pm line 1082.
unable to parse worker upid ''
UPID:node1:00007C12:0003F4A4:5EE3592B:reloadnetworkall::root@pam:

I would appreciate any help. Thank you.
 

Attachments
  • ifreload.txt
@Claudiu Popescu

Hi, thanks for testing!

"peers 192.168.100.10 192.168.100.11 192.168.100.12"

It needs to be a comma-separated list:

Code:
peers 192.168.100.10,192.168.100.11,192.168.100.12

I'll add a check for this, and also add some error messages to the task log.

edit: It seems the GUI allows spaces as well as commas. I'll change the code to handle both.
 
Hello

That was it, it's fixed now.
I checked the documentation again and I see the commas now. I can't believe I missed that.

@Claudiu Popescu

About the "pvesh set /cluster/sdn" warning: are you using the latest package versions from the no-subscription repository?
It should already be fixed.

#pveversion -v ?

It was in the first quote in my previous post.
Yes, I have the latest packages from no-subscription repo.
Also, now that the IP list is fixed, it doesn't show the warnings anymore.

Thank you for your work.
 
Great! I have prepared a patch to handle lists with spaces too, as the GUI control doesn't forbid them.
 
@Marcin Kubica

Hi,

yes, I'm planning to add NAT too, and also simple isolated/routed vnets.

I'm also working on implementing subnet/IP management for VMs;

from there, I think it could be easy to add a NAT option per source subnet,
directly in the Proxmox firewall:


Code:
iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE

(and later maybe implement DHCP on vnets too)
 
@spirit

I'm currently using VXLAN in a mesh configuration with 3 hosts. It seems that whenever a host crashes, the network connections between the remaining hosts reset and blip momentarily (even though the traffic should flow directly between them and remain unaffected). Is there a setting to help prevent this from happening?
 

mmm... do you mean that if you have host1, host2, host3, and host1 crashes, communication between host2 and host3 resets/hangs too?
I'm not aware of this; I will do some tests.

Does it happen only on a crash, or also on a clean shutdown?

The VXLAN handling is really done by the kernel VXLAN driver, so I don't think I can change that, but maybe it's a bug.

The bgp-evpn plugin could work around this (it also uses VXLAN), but the setup is more complex than a simple VXLAN tunnel.

Thanks for the report, I'll keep you posted.
 

It seems to happen on a host crash, not on a normal reboot of the host. Perhaps one way to test this is to have a 3-node cluster and then cause one of the nodes to kernel panic.
 
I'll try with a kernel panic (so far I have only tried a power-off, by pulling the power cable).
 
@BobMccapherey

I have tested with a kernel panic ("echo c > /proc/sysrq-trigger") and I can't reproduce it: no packet loss, no connection hang.
Can you share your configuration?

I have OPNsense running on two different VMs in an HA CARP configuration.

The VMs are connected to custnet (which is attached to the OPNsense VMs as the LAN interface) with the configuration below:

Code:
root@virt-slc-90:/etc/pve/sdn# cat vnets.cfg
vnet: custnet
    tag 200000
    zone intvxlan

vnet: cryptnet
    tag 314159
    zone intvxlan

root@virt-slc-90:/etc/pve/sdn# cat zones.cfg
vxlan: intvxlan
    peers 10.170.214.10,10.170.57.66,10.170.55.90,10.170.208.98
    mtu 1450

root@virt-slc-90:/etc/pve/sdn# cat /etc/network/interfaces.d/sdn 
#version:12

auto cryptnet
iface cryptnet
    bridge_ports vxlan_cryptnet
    bridge_stp off
    bridge_fd 0
    mtu 1450

auto custnet
iface custnet
    bridge_ports vxlan_custnet
    bridge_stp off
    bridge_fd 0
    mtu 1450

auto vxlan_cryptnet
iface vxlan_cryptnet
    vxlan-id 314159
    vxlan_remoteip 10.170.214.10
    vxlan_remoteip 10.170.57.66
    vxlan_remoteip 10.170.208.98
    mtu 1450

auto vxlan_custnet
iface vxlan_custnet
    vxlan-id 200000
    vxlan_remoteip 10.170.214.10
    vxlan_remoteip 10.170.57.66
    vxlan_remoteip 10.170.208.98
    mtu 1450

The WAN interface on OPNsense is connected directly to the network interface with multiple gateways configured in a load-balancing configuration.

The OPNsense VMs are not running on the host that crashes, so they should remain unaffected.
 
Well, connecting 2 remote Proxmox boxes, for example using VXLAN tunnels, works really well.
So far so good.

I was wondering: what options are there to add a security/crypto layer?
Obviously I mean without external devices/apps.
 

Currently, I don't have an out-of-the-box solution for VXLAN encryption.

But MACsec support should come soon in ifupdown2, which will allow encryption on top of VXLAN interfaces.
The kernel already supports it, but I'm waiting for official support in ifupdown2 (before the end of the year).

https://bootlin.com/blog/network-traffic-encryption-in-linux-using-macsec-and-hardware-offloading/
https://developers.redhat.com/blog/...ifferent-solution-to-encrypt-network-traffic/
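
Until then, for reference, the manual iproute2 setup would look roughly like this. Just a sketch, with a placeholder interface name and key; a real setup also needs a matching rx channel and secure association for each peer:

Code:
# create a MACsec device on top of the vxlan interface, with encryption enabled
ip link add link vxlan_mynet macsec0 type macsec encrypt on
# install a transmit secure association (the 128-bit hex key here is a placeholder)
ip macsec add macsec0 tx sa 0 pn 1 on key 01 11223344556677889900aabbccddeeff
ip link set macsec0 up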
 
Thank you very much for your answer.

I fear that MACsec is not an option, since it is a layer 2 protocol.
My 2 boxes sit in different datacenters.
I'll try the IPsec way.
If I find an elegant solution, I'll post a small guide.
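
The direction I'm thinking of: transport-mode IPsec with strongSwan, encrypting just the VXLAN UDP traffic between the two hosts. A rough, untested sketch (IPs and the PSK are placeholders; note that the Linux kernel may use its legacy default VXLAN port 8472 rather than the IANA 4789, so adjust accordingly):

Code:
# /etc/ipsec.conf, on both hosts (strongSwan resolves left/right locally)
conn vxlan-crypt
    keyexchange=ikev2
    type=transport
    authby=secret
    left=198.51.100.1
    right=198.51.100.2
    leftprotoport=udp/4789
    rightprotoport=udp/4789
    auto=start

# /etc/ipsec.secrets
198.51.100.1 198.51.100.2 : PSK "change-me"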
 
