Configure network interface for VLANs based on lo?

mnovi

Hello,

I have 3 servers with Proxmox forming a cluster. Each is connected directly to the other two with a 10G NIC, meaning they are in a full mesh network topology. To get connectivity between them I followed the Full Mesh Network for Ceph guide (even though I don't use Ceph there) and used the Routed Setup based on FRR with the OpenFabric protocol. The connection works without any problems; the nodes can ping and communicate with each other.
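For reference, the routed setup from the guide looks roughly like this on each node (interface names, the loopback address and the NET below are placeholders, not my actual values):

```
# /etc/network/interfaces (excerpt) – the node's mesh IP lives on the loopback
auto lo
iface lo inet loopback

auto lo:0
iface lo:0 inet static
        address 10.15.15.50/32

# /etc/frr/frr.conf (excerpt) – fabricd enabled in /etc/frr/daemons
interface lo
 ip router openfabric 1
 openfabric passive
!
interface ens19
 ip router openfabric 1
!
interface ens20
 ip router openfabric 1
!
router openfabric 1
 net 49.0001.1111.1111.1111.00
```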

On top of the OpenFabric mesh network I would like to create another virtual network (available only on these 3 servers) with multiple VLANs. It would be intended for VMs from all three servers, so they can communicate and are not isolated to a specific server. I already tried to do this with a bridge, but that's not possible, since the IP used to access the OpenFabric network is assigned to the lo interface, and lo therefore cannot be added to a bridge. To work around this I figured out I can create a VXLAN and use lo as its backend device. This works until I add the VXLAN interface to the bridge; after that I'm unable to ping the VXLAN peers, so it's not a good solution.
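To make the idea concrete, this is roughly the kind of setup I mean (names, IDs and addresses are placeholders; here the VXLAN is bound to the loopback address with `local` rather than to lo as a device, and the peers are added as static flood entries):

```
# Node 1: mesh/loopback IP 10.15.15.50; peers 10.15.15.51 and 10.15.15.52 (placeholders)
ip link add vxlan100 type vxlan id 100 local 10.15.15.50 dstport 4789 nolearning
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 10.15.15.51
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 10.15.15.52

# VLAN-aware bridge for the VMs, with the VXLAN as a trunk port
ip link add vmbr1 type bridge
ip link set vmbr1 type bridge vlan_filtering 1
ip link set vxlan100 master vmbr1
bridge vlan add dev vxlan100 vid 2-100
ip link set vxlan100 up
ip link set vmbr1 up
```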

I'm stuck at this point, as I believe there must be a way to create a virtual network interface on top of lo, but I don't know how. The VXLAN in this case is probably just a complication, but it's the only solution I have almost successfully implemented. I would be very happy if someone could give me any info on what I should do to get the virtual network + VLANs working.

Thanks in advance!
 
Any idea what I should do to get the virtual network + VLANs working together with FRR/OpenFabric?
 
@vesalius First of all thanks for your reply.

Regarding the RSTP Loop Setup with OVS, I actually did that, plus established VLANs, in a GNS3 test project with Alpine containers. The machines connected to the bridge (in GNS3 they were PCs, but in the Proxmox environment they would be VMs) could ping each other and every server. When I tried to do the same on the Proxmox servers I couldn't get the VLANs working as in GNS3, so I also created a thread on the forum: https://forum.proxmox.com/threads/ovs-bridge-full-mesh-with-vlan-support.109404/ ... I disabled the firewall and everything else that could impact routing, but my lack of knowledge on how to debug such cases prevented further investigation. If there is any information on how to debug network/routing/sysctl settings in this case, I would really appreciate it.
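For reference, these are the only basic checks I know to run so far (vmbr1/ens19 are placeholder names, not my actual config):

```
ovs-vsctl show                          # bridge, ports, VLAN tags/trunks
ovs-appctl rstp/show vmbr1              # RSTP port roles/states (if available in your OVS build)
ovs-appctl fdb/show vmbr1               # learned MACs and the VLAN they were seen on
tcpdump -eni ens19 vlan                 # do tagged frames actually leave/arrive on the link?
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
```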

I didn't even try the Broadcast Setup, as it has no fallback if one of the links fails. I know it can be established together with the Routed Setup, but I don't have 2x 10G NICs (yet) to create bonds for each link (or maybe I don't understand correctly how this should be done). It may become an option if I don't succeed in setting up VLANs any other way.

Batman looks promising and I will test it. The problem I have with VXLAN compared to VLAN is that I don't believe it has any real value in a case like a full mesh network between Proxmox servers - it just takes 50 B out of every packet, which adds overhead, and the VLAN ID range is already more than enough for my case. Also, with Batman the MTU is limited to 1500 B, as I saw in the other thread. I have successfully established VXLAN in GNS3 with FRR/OpenFabric (via the lo interface), but if there is a way to get VLANs working instead of VXLAN, that would be my preferred solution. OpenFabric also supports an MTU of 9000 B.

In the end my goal is to get separate networks on the full mesh network, so that the VMs can communicate directly between servers without the need for an additional hardware switch (and a possibly slower 1G network). I'd prefer that the protocol allows for a fallback in case of link failure and that VLANs work over that network in Proxmox.
 
I can't help further on OVS as I have only used it briefly, sorry.

Let me preface this next question by saying that I understand the innate desire to optimize and squeeze out every bit of speed possible. I subconsciously feel the same way.

Specifically on VXLAN: what are you transferring across those links (and for how long at maximal speed) such that the subtraction of 50 B will make any discernible difference to you during your day?

Also, and I apologize if you already know this, have you considered that many NICs and switches will allow you to set a maximum MTU well above 9000 and therefore make the 50 B subtraction a net change of zero from your current config? I do not believe @spirit's suggestions in the linked thread used the FRR/OpenFabric (via lo interface) route, btw.
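To spell out the arithmetic with a purely illustrative example (interface names are placeholders): the 50 B is just the outer Ethernet (14) + IPv4 (20) + UDP (8) + VXLAN (8) headers, so raising the physical MTU by at least that much keeps the full 9000 B for the inner frames:

```
# 50 B overhead = outer Ethernet 14 + outer IPv4 20 + UDP 8 + VXLAN 8
ip link set ens19 mtu 9050
ip link set ens20 mtu 9050
ip link set vxlan100 mtu 9000   # inner frames still get the full 9000 B
```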
 
I understand that my idea of getting VLANs working is probably just wasted time compared to the working environment I would already have with VXLAN. But as you wrote, the point here is that I get to the root of the problem, so I can understand why something works (or doesn't) the way I want. I don't believe any book on networking will get me any closer, as this is a specific problem, which is why I'm asking here. I have to mention I'm very happy that at least you are answering my questions/thoughts :)

The data that will be going over the links:
  • iSCSI/NFS traffic for storage access (1 node has an iSCSI server for a DB, 1 node has an NFS server for files; not perfect, but the servers have different specs, so Ceph is out of the question there)
  • K8s communication between cluster nodes
I don't think there will be any long-running transfers requiring most of the bandwidth; the file storage will only hold files with a maximum size of around 1 GB, which will be delivered to end users via HTTP. Files that large are rare, though - the average will be a few MB.

The NICs I'm using are Intel X540, and if I'm not mistaken they have a hardware MTU limit a bit over 9000 B. In that case the 50 B is almost nothing, so I agree with you there.

I still need to do some tests with FRR/OpenFabric+VXLAN and Batman+VXLAN on the actual servers (not just in GNS3) and I will report the results later.
 
