Help with SDN

brephil

Member
Apr 24, 2021
I am trying to set up VXLAN using the SDN feature as outlined here:
https://pve.proxmox.com/pve-docs/chapter-pvesdn.html#pvesdn_zone_plugin_simple

My requirement is basically to use a 10G interface between the two nodes for better VM-to-VM transfer rates (NAS replica). Maybe there are better ways?

But in my two-node cluster I followed the directions outlined on that page and cannot get Node1_vm to ping Node2_vm.

I checked the lower-level interfaces and they both show a good link, full duplex at 10 Gbit/s, so I know the physical layer is good.

The only slight deviation is that my test VMs are Ubuntu desktops, so the network setup inside the guests is a little different, but I'm not sure that matters. The static IPs are set, MTU 1450 is set, and there is no gateway.

First, is VXLAN the way to go? If yes, what should I look into on this setup to get it working?
 
Node01:
/etc/pve/sdn/vnets.cfg
Code:
vnet: myvet1
        zone vxlan01
        tag 100000

/etc/pve/sdn/zones.cfg
Code:
vxlan: vxlan01
        peers 192.168.0.1,192.168.0.2
        ipam pve
        mtu 1450

/etc/network/interfaces
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

iface ens6f1 inet manual

iface ens6f0 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.0.4.10/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
#WAN

auto vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
#vWAN

auto vmbr2
iface vmbr2 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports ens6f0
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*



/etc/network/interfaces.d/sdn
Code:
#version:1

auto myvet1
iface myvet1
        bridge_ports vxlan_myvet1
        bridge_stp off
        bridge_fd 0
        mtu 1450

auto vxlan_myvet1
iface vxlan_myvet1
        vxlan-id 100000
        vxlan_remoteip 192.168.0.2
        mtu 1450

Node02:
/etc/pve/sdn/vnets.cfg
Code:
vnet: myvet1
        zone vxlan01
        tag 100000

/etc/pve/sdn/zones.cfg
Code:
vxlan: vxlan01
        peers 192.168.0.1,192.168.0.2
        ipam pve
        mtu 1450

/etc/network/interfaces
Code:
auto lo
iface lo inet loopback

iface enp4s0f0 inet manual

iface enp4s0f1 inet manual

iface enp4s0f2 inet manual

iface enp4s0f3 inet manual

iface enp5s0f1 inet manual

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.0.4.20/24
        bridge-ports enp4s0f0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
#vWAN

auto vmbr2
iface vmbr2 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*


/etc/network/interfaces.d/sdn
Code:
#version:1

auto myvet1
iface myvet1
        bridge_ports vxlan_myvet1
        bridge_stp off
        bridge_fd 0
        mtu 1450

auto vxlan_myvet1
iface vxlan_myvet1
        vxlan-id 100000
        vxlan_remoteip 192.168.0.1
        mtu 1450
 
By removing the gateway from vmbr2 on both nodes it seemed to work... I can now ping VM to VM.


I want a dedicated link between the nodes (basically NIC to NIC, no switch in between), and as long as both use the same subnet there is no need for any gateway. But should it have worked with the gateway set?
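
For what it's worth, a quick way to confirm the gateway is not involved for this traffic (a minimal check, assuming the addresses above) is to ask the kernel which route it would use:
Code:
# on Node01: 192.168.0.2 should be reached via the directly connected
# vmbr2 network, not via the 192.168.0.254 gateway
ip route get 192.168.0.2
# typically something like: 192.168.0.2 dev vmbr2 src 192.168.0.1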
 
Your config seems to be correct.

I don't see why it's not working with the gateway... (are you sure the config was correctly reloaded?)

If 192.168.0.1 can reach 192.168.0.2, the VXLAN tunnel should work out of the box.

(With the gateway defined, are you able to ping from 192.168.0.1 to 192.168.0.2?)
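
A minimal set of checks on each node (a sketch, assuming the interface names from the configs above; ifreload requires ifupdown2):
Code:
# re-apply the network config after changing /etc/network/interfaces
# or after the SDN config in /etc/network/interfaces.d/sdn is regenerated
ifreload -a

# the VXLAN device should show the expected VNI, and the fdb should have a
# default 00:00:00:00:00:00 entry pointing at the peer's address
ip -d link show vxlan_myvet1
bridge fdb show dev vxlan_myvet1

# underlay connectivity between the peer addresses
ping -c 3 192.168.0.2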
 
I do not know why it works now either... but I was able to add the gateway back and it's still working; maybe something was not quite loaded, as you said.

BTW, does this only work with MTU 1500? I noticed that when I go to 9000 on the interfaces and bridge (and 8950 for the VXLAN zone and VMs) all hell breaks loose (NOT pretty, I lose the connection to Node02 once SDN is applied). Is there any way to build this with jumbo frames?
 
It works with MTU 9000 without problems, but the vnet still needs to be 50 bytes lower than the physical interface (the VXLAN encapsulation adds 50 bytes: 14-byte outer Ethernet + 20-byte IP + 8-byte UDP + 8-byte VXLAN header).

I'm running VXLAN in production with mtu=9200 on my switches + mtu=9200 on the ethX interfaces of the hypervisors + mtu=9000 on the vnet.

But it should also work with 9000 on the physical interface + 8950 on the VXLAN.
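
To make that concrete, here is a minimal sketch of the MTU layering (assuming the interface and zone names used above; Node01 shown, use eno1 / 192.168.0.2 on Node02):
Code:
# /etc/network/interfaces (underlay)
auto ens6f0
iface ens6f0 inet manual
        mtu 9000

auto vmbr2
iface vmbr2 inet static
        address 192.168.0.1/24
        bridge-ports ens6f0
        bridge-stp off
        bridge-fd 0
        mtu 9000

# /etc/pve/sdn/zones.cfg (overlay)
vxlan: vxlan01
        peers 192.168.0.1,192.168.0.2
        ipam pve
        mtu 8950

The guest NICs attached to the vnet then also need to be set to 8950 (or lower) inside the VMs.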

Can you send the result of "ip addr" with your MTU 9000 config when it's not working?
 
This is from Node 02:
Code:
6: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master vmbr2 state UP group default qlen 1000
    link/ether 3c:ec:ef:1b:c6:dc brd ff:ff:ff:ff:ff:ff
12: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 3c:ec:ef:1b:c6:dc brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.2/24 scope global vmbr2
       valid_lft forever preferred_lft forever
    inet6 fe80::3eec:efff:fe1b:c6dc/64 scope link
       valid_lft forever preferred_lft forever
13: vxlan_myvet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master myvet1 state UNKNOWN group default qlen 1000
    link/ether 32:4a:7a:2f:9a:3a brd ff:ff:ff:ff:ff:ff
14: myvet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default qlen 1000
    link/ether 32:4a:7a:2f:9a:3a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::304a:7aff:fe2f:9a3a/64 scope link
       valid_lft forever preferred_lft forever

As soon as I switch vmbr2 to 9000 on Node01, I lose access to Node02 from the UI.

Here is Node01 when it works (vmbr2 at 1500):
Code:
4: ens6f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master vmbr2 state UP group default qlen 1000
    link/ether 0c:c4:7a:b7:4e:28 brd ff:ff:ff:ff:ff:ff
10: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:c4:7a:b7:4e:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 scope global vmbr2
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:feb7:4e28/64 scope link
       valid_lft forever preferred_lft forever
12: myvet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default qlen 1000
    link/ether f2:cd:33:a2:31:d8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::8a6:70ff:fef2:4961/64 scope link
       valid_lft forever preferred_lft forever
63: vxlan_myvet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master myvet1 state UNKNOWN group default qlen 1000
    link/ether ae:6f:a9:53:e7:f0 brd ff:ff:ff:ff:ff:ff

Here is Node01 when it does not work (I lose access to Node02 from the UI):
Code:
4: ens6f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master vmbr2 state UP group default qlen 1000
    link/ether 0c:c4:7a:b7:4e:28 brd ff:ff:ff:ff:ff:ff
10: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:c4:7a:b7:4e:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 scope global vmbr2
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:feb7:4e28/64 scope link
       valid_lft forever preferred_lft forever
12: myvet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default qlen 1000
    link/ether f2:cd:33:a2:31:d8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::8a6:70ff:fef2:4961/64 scope link
       valid_lft forever preferred_lft forever
63: vxlan_myvet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master myvet1 state UNKNOWN group default qlen 1000
    link/ether ae:6f:a9:53:e7:f0 brd ff:ff:ff:ff:ff:ff
 
All seems to be fine at the host level.

Are you sure it's not a physical switch configuration problem where jumbo frames are not enabled?
Are you able to do a "ping -M do -s 8950 192.168.0.2" from 192.168.0.1?
 
Hi, yes... there is no switch, this is direct NIC to NIC.

There does not seem to be any response from the ping:
root@pvenode01:~# ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.152 ms
64 bytes from 192.168.0.2: icmp_seq=2 ttl=64 time=0.110 ms
64 bytes from 192.168.0.2: icmp_seq=3 ttl=64 time=0.144 ms
^C
--- 192.168.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 41ms
rtt min/avg/max/mdev = 0.110/0.135/0.152/0.020 ms
root@pvenode01:~# ping -Mdo -s 8950 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 8950(8978) bytes of data.
---- just hangs
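
One way to narrow this down (a sketch, assuming the same addresses) is to step the payload size up and see where the drops start; 1472 bytes of ICMP payload plus 28 bytes of headers exactly fills a 1500-byte MTU, and 8972 fills 9000:
Code:
# fits in a standard 1500-byte MTU (1472 + 8 ICMP + 20 IP = 1500)
ping -M do -s 1472 -c 3 192.168.0.2

# requires jumbo frames end to end (8972 + 28 = 9000)
ping -M do -s 8972 -c 3 192.168.0.2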
 

And you have MTU 9000 on ens6f0 && vmbr2 on both servers?
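
A quick way to compare both sides (assuming the device names above; on Node02 substitute eno1 for ens6f0):
Code:
# run on both nodes; expect 9000 on the NIC and bridge, 8950 on the VXLAN devices
for i in ens6f0 vmbr2 vxlan_myvet1 myvet1; do
    echo -n "$i: "; cat /sys/class/net/$i/mtu
done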
 
Well, this is very odd... I removed the SDN config and rebuilt it. I did not change the underlying settings, they were all the same (vmbrs etc.). But now jumbo frames are working.

BTW, the MTU setting was 9000 across the board last time as well; I had only reverted it to get the link between Node01 and Node02 working again.

I think it could have been something to do with the SDN part not taking effect the first time (probably due to some bad setting I had in the interfaces file), so removing it and adding it back helped a lot.

Update: well, after a minute of working... the issue is coming back. So ignore all that; let me try disabling SDN and just get the nodes talking again.
 
So maybe I do not understand something... I have two 10 Gbps NICs (each has two ports):

NIC 1: Port1 and Port2
NIC 2: Port1 and Port2

I do not have the money to buy a 10 Gbps RJ-45 switch for these two nodes, so I thought I could use a standard CAT 6e cable between NIC1 Port1 and NIC2 Port1. ethtool shows a good link, full duplex at 10 Gbps, from both sides. With that, I thought I could bring up N1 P1 and N2 P1 and assign each an IP in the same subnet (i.e. 192.168.0.1 and 192.168.0.2).

But it would not work... :(

What I found out is that to get this working I need both ports enabled, connected, and with IPs assigned.

Like this on each node:
Code:
#port 0
auto enp5s0f0
iface enp5s0f0 inet static
        address 192.168.1.2/24

#port 1
auto enp5s0f1
iface enp5s0f1 inet static
        address 192.168.0.2/24


Then and only then can I get the nodes "talking" (I think that has been my issue all along, as I had it set up like this and was changing the MTU on one pair of ports, but not on all 4!).

BTW, I did make a crossover cable, but it did not seem to help when I first started this. I'm willing to try it again if it will work. There is really not a lot of info on 10G RJ-45 crossover connections, so I'm not too sure. I was just happy to see the link with a standard cable, so I figured there was no need to make the crossover cable work.

Can someone explain this? Is this really required, or am I missing something? I know with a switch in the middle the total number of ports between the NICs and the switch is 4, so is that why I need 4 vs just 2 to get this working? Or am I just a moron?

I think it would be much easier to invest in a switch :)... But I'm a cheap bastard.
 

Crossover cables were only needed up to 100 Mbit, where not all wire pairs were used; since gigabit we have the Auto-MDIX specification, which does automatic crossover if needed, as all wire pairs are used.


I really don't know what is not working, maybe it's a NIC bug... You should be able to use only 1 port of 1 NIC on the first node with 192.168.0.1, and 1 port of 1 NIC on the other node with 192.168.0.2, plug in the cable, and it should ping. (If ethtool shows that it's connected, the physical cable layer is OK.)
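
A minimal sketch of that single-port, back-to-back setup (assuming enp5s0f0 is the connected port on both nodes; adjust to whatever ethtool shows as linked):
Code:
# Node01, /etc/network/interfaces
auto enp5s0f0
iface enp5s0f0 inet static
        address 192.168.0.1/24
        mtu 9000

# Node02, /etc/network/interfaces
auto enp5s0f0
iface enp5s0f0 inet static
        address 192.168.0.2/24
        mtu 9000

With just these two stanzas (and matching MTUs on both ends), 192.168.0.1 and 192.168.0.2 should be able to ping each other directly, and the peers in zones.cfg can stay the same.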
 
