SDN and networking best practice

koniambo

New Member
Jul 4, 2025
Hi,

Just a few questions about SDN and networking in a Proxmox cluster.

Can the management network be in SDN?

I have a cluster with 2 bonds of 2 interfaces on each node: one bond for storage (a Ceph cluster) and the other one for management, corosync and the cluster network. Maybe that's too much for one bond?

Thanks
 
Whether it's too much depends on how much traffic your cluster network carries.
Usually it's not too much.

But when you saturate that bond, you risk a corosync outage. So if possible, prioritize the corosync VLAN on your switch and/or guarantee some bandwidth to it. 1 Mbit/s is enough; corosync needs low latency, not bandwidth, but it needs that at all times.
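If you want a safety net beyond switch QoS, corosync itself can use multiple knet links with priorities, so it can fall back to a second network if the primary one degrades. A sketch of the relevant corosync.conf fragments (interface numbers and addresses here are made up for illustration; on Proxmox the file lives under /etc/pve/corosync.conf and config_version must be bumped on every edit):

```
totem {
  # ... existing settings ...
  config_version: 2        # increment on every change

  interface {
    linknumber: 0
    knet_link_priority: 10 # preferred link (management bond)
  }
  interface {
    linknumber: 1
    knet_link_priority: 5  # fallback link (e.g. the storage bond)
  }
}

nodelist {
  node {
    name: node1
    ring0_addr: 10.0.30.11 # corosync VLAN (assumed addressing)
    ring1_addr: 10.0.40.11 # fallback network (assumed addressing)
    # ...
  }
}
```

In the default passive mode, knet uses the highest-priority link that is up, so the second link only carries traffic when the first one fails.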

The only question is backups then. Sure, you probably do them in the off hours when no one cares, but keep restores in mind. Again, it will depend on how saturated your network normally is. If you have high saturation all the time, you may want to think about a dedicated transport network, outside of Ceph, for migrations, backups, restores and such things.

If it's not saturated at all, then no one cares and it's all good; at that point your bonds are probably already overkill.

As for your SDN question, what exactly do you mean? A simple run-of-the-mill VLAN setup? Yeah, sure: that's one of the most basic setups in SDN, just one step over local.
 
First of all, thanks for your answer!

I'm pretty new to Proxmox and this is a testing cluster, sorry if it's not very clear.

But when you saturate that bond, you risk a corosync outage. So if possible, prioritize the corosync VLAN on your switch and/or guarantee some bandwidth to it. 1 Mbit/s is enough; corosync needs low latency, not bandwidth, but it needs that at all times.
Then there's no need for a dedicated network for corosync, only prioritization is needed. Good!

The only question is backups then. Sure, you probably do them in the off hours when no one cares, but keep restores in mind. Again, it will depend on how saturated your network normally is. If you have high saturation all the time, you may want to think about a dedicated transport network, outside of Ceph, for migrations, backups, restores and such things.

We haven't thought about backups yet. Do you know what the network best practices are there?

If it's not saturated at all, then no one cares and it's all good; at that point your bonds are probably already overkill.

The bonded interfaces are needed for redundancy reasons in our infra.

As for your SDN question, what exactly do you mean? A simple run-of-the-mill VLAN setup? Yeah, sure: that's one of the most basic setups in SDN, just one step over local.
I had a "classic" network conf looking like this, but I saw that via SDN it would be simpler to manage VLANs (we're going to add plenty of VLANs, and I don't really want to add each of them on every node as below). The management network used to access the cluster via the GUI is VLAN 30. Is it a good idea to delete the whole /etc/network/interfaces conf and put everything in VNets (including VLAN 30), or should some VLANs stay outside of SDN VNets for safety reasons?

[Screenshot: the node's /etc/network/interfaces configuration]
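(For reference, a VLAN zone plus one VNet per VLAN in SDN boils down to something like the following; the zone/VNet names are made up, and the same can be done in the GUI under Datacenter → SDN:)

```
# create a VLAN zone backed by the existing bridge on every node
pvesh create /cluster/sdn/zones --type vlan --zone vlanzone --bridge vmbr0

# one VNet per VLAN, e.g. VLAN 30
pvesh create /cluster/sdn/vnets --vnet vnet30 --zone vlanzone --tag 30

# apply the pending SDN configuration cluster-wide
pvesh set /cluster/sdn
```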
 
Yeah, so corosync doesn't need to be dedicated, but it depends on your switches and NICs. If they can make sure you always have low latency even when the bond is saturated, then yes, you don't need to separate it.
I would however stress test this, as I have seen switches that don't do that exactly, or even NICs that start having congestion issues when they shouldn't, even with free bandwidth. That's why the common sentiment is to have a separate NIC for corosync. Not that it's always needed, but it's the simplest advice that will always work, no matter what.

About backups: well, as I said, it depends on how saturated you are and how critical that network is.
Best practice on an unlimited budget is of course a separate transport network for migrations, your Ceph storage, and backups (write and restore).
This guarantees that the normal network traffic of your hosts is unaffected at all times, no matter what you do.

In practice this is usually overkill. Still, it's nice to have and not have to think about.
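On that note, Proxmox lets you pin migration traffic to a specific network cluster-wide via /etc/pve/datacenter.cfg; the subnet below is just an example:

```
# /etc/pve/datacenter.cfg (fragment) -- assumed subnet for the transport network
migration: secure,network=10.10.20.0/24
```

With this set, live migrations use that network instead of the management one.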


As for SDN, it's a matter of taste and need. There is also another way to use VLANs if you don't want to use SDN:
simply make the bridge VLAN aware and add the interface that carries the VLANs as its bridge port,
then define the range of VLAN IDs the bridge should be able to handle.

So in your example, let's say vmbr0 should handle everything:
vmbr0 gets bond0 as its bridge port and is VLAN aware.

Nothing else to do on each node, other than keeping the interface name vmbr0 identical.
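The setup above, sketched in /etc/network/interfaces (NIC names, bond mode and the management address are assumptions; adjust to your hardware):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# node management IP on VLAN 30 (address made up for illustration)
auto vmbr0.30
iface vmbr0.30 inet static
    address 10.0.30.11/24
    gateway 10.0.30.1
```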


In each VM you can then specify a VLAN ID on each network adapter, which will be passed as native (untagged) to the guest.
OR
you specify nothing, in which case bond0's native VLAN is passed as native to the VM, but the tagged VLANs are passed through as well, so the VM can access each VLAN itself,
with an OS that can handle that.
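Both variants can be set per NIC in the GUI or with qm (the VM ID 100 here is just an example):

```
# first NIC tagged with VLAN 30: the guest sees it untagged
qm set 100 --net0 virtio,bridge=vmbr0,tag=30

# no tag: the guest gets the native VLAN plus all tagged VLANs as a trunk
qm set 100 --net0 virtio,bridge=vmbr0
```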

Of course it's all a matter of taste.
SDN simply has the advantage that you can do several things at once right at the network level (like using DHCP, IPAM, DNS registration etc.).
You can then also use the Proxmox firewall to manage inter-VLAN traffic (even though that's pretty much useless in a VLAN SDN; it's more relevant in SDN setups where Proxmox really does do some routing).

So it mainly depends on your needs. If you're like me and you use static assignment anyway and don't need the other features, it's pretty much potato, potahto.

In total, SDN is probably cleaner, but a VLAN-aware bridge is, well... less overhead. Depends on your setup and needs.