New to Proxmox: how to set up the network

Extin

New Member
Oct 10, 2025
Hi all, a real newbie question. I'm new to Proxmox as a platform, not to virtualisation.

I am planning to build a 3-node cluster with Ceph, but I am getting stuck on the best way to set up the network.

Hardware:
3x Supermicro servers, each with:
2x 10G RJ-45
2x 100G SFP28
2x 1G RJ-45

DC:
1x 4-port 10G switch
6x 1G uplinks from the DC gateway
1x 4-port 100G SFP switch (optional)

IP:
2x /27 ranges
9 IPs for management / IPMI etc.

What's the best approach here: how should I divide up the network adapters?

1. I have set up the cluster / corosync network on the 1G NICs. Since I don't have enough switch ports left over for this in the DC, I am thinking of using direct links, with no bridging or bonding on them (so using 2x 1G per machine).
Is this possible, or is a switch mandatory?

2. vmbr0 for the (public) VM traffic: the 2x 10G ports in an OVS bond to the DC gateway (thinking that a routed setup is best for assigning the /27 -> https://pve.proxmox.com/wiki/Network_Configuration). A rough sketch of what I have in mind is below, after item 3.
Management is also connected to this bridge; is that possible?
I also connected vmbr0 to a pfSense firewall + DHCP on vmbr1 (for cross-communication between a VM on host A and another VM on host B that is not publicly reachable).

3. vmbr2: the 2x 100G ports in a routed setup (with fallback), so without a switch, for the storage cluster, OR 1x 100G port per node to a switch (I only have one 100G SFP switch).
In the first option both 100G ports are used; with a switch I have one 100G port per machine left over.
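To make item 2 concrete, this is roughly what I have in mind for vmbr0 (just a sketch, not my final config; interface names and addresses are placeholders):

```
# /etc/network/interfaces (fragment) -- sketch only, names/addresses are placeholders
auto bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds enp1s0f0 enp1s0f1
    ovs_options bond_mode=active-backup

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 mgmt0

# internal port carrying the node's management IP on the same bridge
auto mgmt0
iface mgmt0 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    address 192.0.2.10/27
    gateway 192.0.2.1
```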

Am I seeing this correctly? Am I missing anything? Am I overcomplicating this?

At the moment I have the 3 machines at home. I want to set up the cluster there, then move it to the DC and give the nodes their public IPs.
Or will implementing it this way cause a lot of issues?
 
1. I have set up the cluster / corosync network on the 1G NICs. Since I don't have enough switch ports left over for this in the DC, I am thinking of using direct links, with no bridging or bonding on them (so using 2x 1G per machine).
Is this possible, or is a switch mandatory?
It's possible to do this without a switch; that's basically a full-mesh network. See our wiki how-to for options on configuring it; while it's written with Ceph in mind, the basics don't change, so it will also work for corosync.
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
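As a rough illustration of the routed variant from that page, adapted to the 1G links here (a sketch only; interface names and the 10.15.15.x addresses are placeholders, and the other two nodes get the mirrored configs), node 1 could look like:

```
# /etc/network/interfaces on node 1 (10.15.15.50), 1G NICs cabled directly to nodes 2 and 3
auto eno3
iface eno3 inet static
    address 10.15.15.50/24
    # direct link to node 2
    up ip route add 10.15.15.51/32 dev eno3
    down ip route del 10.15.15.51/32

auto eno4
iface eno4 inet static
    address 10.15.15.50/24
    # direct link to node 3
    up ip route add 10.15.15.52/32 dev eno4
    down ip route del 10.15.15.52/32
```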
2. vmbr0 for the (public) VM traffic: the 2x 10G ports in an OVS bond to the DC gateway (thinking that a routed setup is best for assigning the /27 -> https://pve.proxmox.com/wiki/Network_Configuration).
Management is also connected to this bridge; is that possible?
The PVE management API daemon listens on all local IPs by default. On its own it won't produce much traffic, and things that might produce more traffic (like VM migrations or replications) can be configured to go through a specific network in the datacenter options. Most of the time it basically just matters to separate heavy (IO) traffic from the latency-sensitive corosync cluster network and to give the Ceph cluster network enough bandwidth, both of which you do here, so the third network (10G) for public/DC traffic sounds good.
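For example, migration traffic can be pinned to a dedicated network under Datacenter -> Options, or directly in the config; a minimal sketch (the CIDR below is just a placeholder for whichever network you pick):

```
# /etc/pve/datacenter.cfg -- sketch, the CIDR is a placeholder
migration: secure,network=10.15.15.0/24
```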

3. vmbr2: the 2x 100G ports in a routed setup (with fallback), so without a switch, for the storage cluster, OR 1x 100G port per node to a switch (I only have one 100G SFP switch).
In the first option both 100G ports are used; with a switch I have one 100G port per machine left over.
A routed setup might give you a bit more throughput, but that depends on the CPUs and disks available. Going full-mesh can give you a bit more redundancy: the server NICs are in your critical path anyway, and avoiding an extra single point of failure can only help. But it's a trade-off for a bit more complexity, as handling a simple switch is always a bit easier. So IMO it would mostly depend on whether you can use the extra 100G port for anything useful.
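Whichever way you go, the point is that the Ceph cluster (OSD replication) network ends up on the 100G links. As a sketch, the resulting Ceph config would contain something along these lines (the subnets are placeholders, normally set when initializing Ceph via pveceph):

```
# /etc/pve/ceph.conf (excerpt) -- sketch, subnets are placeholders
[global]
    public_network  = 10.20.20.0/24   # Ceph public: MONs + client (RBD) traffic
    cluster_network = 10.30.30.0/24   # Ceph cluster: OSD replication/heartbeat on the 100G links
```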
Am I seeing this correctly? Am I missing anything? Am I overcomplicating this?
The underlying ideas are rather solid and avoid common pitfalls (like having corosync and Ceph/backup traffic on the same network). How well it works out depends on more things (the workloads running in the PVE guests, more HW details, ...).

At the moment I have the 3 machines at home. I want to set up the cluster there, then move it to the DC and give the nodes their public IPs.
Or will implementing it this way cause a lot of issues?
A network change is naturally always a bit tricky to get right, but it sounds like you would not be running production yet while evaluating this at home, so IMO there's no real downside here if you already have the HW at home anyway. On the contrary, debugging at home can be much simpler, as you are physically near the HW and usually have more comfort and less pressure.
Sure, some things (like cooling, and thus temperatures, or power) are different in a DC, but normally they really should be better there than at home... :)
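If you do end up re-IPing the nodes for the move, a rough, non-exhaustive checklist of where node addresses typically live on a PVE node (double-check against your actual setup):

```
# files that usually reference node IPs -- sketch/checklist only
/etc/network/interfaces     # the node's own addresses, gateway, bridges
/etc/hosts                  # hostname -> IP mapping of the node itself
/etc/pve/corosync.conf      # ringX_addr per node; bump config_version when editing
/etc/pve/ceph.conf          # mon_host / public_network, if those subnets change
```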
 
Thank you both for the quick responses, @t.lamprecht and @Johannes S.

To be complete, here are the specs:

Supermicro SuperServer SYS-122C-TN: 12 hot-swap 2.5” NVMe/SATA/SAS bays, 3 PCIe 5.0 x16/x8 slots and 2 PCIe 5.0 x16 AIOM slots
2x Intel Xeon 6517P - 16C/32T, 3.2GHz/4.0GHz turbo, 190W, 72MB
8x 64GB DDR5-6400 Registered ECC (512GB total)
1x Dual 100G QSFP28 Network Adapter OCP 3.0, Broadcom BCM57508
1x Dual 10G RJ45 Network Adapter OCP 3.0, Intel X710-AT2
1x Quad 1G/2.5G RJ45 Network Adapter PCIe x8, Intel I226-T4
12x DC PM9D3a 1.92TB, PCIe 5.0 x4, 2.5” enterprise NVMe
2x 512GB NVMe M.2 Gen4 (6.9GB/s read / 5GB/s write) boot drives

I have worked it out in a diagram, because I think I still don't understand something correctly.

Setup 1
As you can see in the first attachment, the cluster and Ceph networks are directly connected. An uplink to vmbr0 is also provided for management.
However, the wiki says: `one high bandwidth (10+ Gbps) network for Ceph (public) traffic between the ceph server and ceph client storage traffic. Depending on your needs this can also be used to host the virtual guest traffic and the VM live-migration traffic.`

I don't understand this: the Ceph public traffic and the migration traffic together... and also hosting the guest traffic?
How should I interpret this?

Based on the comment above:

'things that might produce more traffic (like VM migrations or replications) can be configured to go through a specific network in the datacenter options. Most of the time it basically just matters to separate heavy (IO) traffic from the latency-sensitive corosync cluster network and to give the Ceph cluster network enough bandwidth, both of which you do here, so the third network (10G) for public/DC traffic sounds good.'

In that case, setup 1 would need the 10G switch for the third network, as I don't have enough network ports?
I preferred no switches, using DAC cables, but if that conclusion is correct and I can't do it without the extra network, I think I have another option as well.

Setup 2
I also have a 100G SFP switch. I could swap it in for the 10G switch: 3 ports at 100G and 1 port broken out to 25G.
See image 2.

Am I mixing things up now, or am I seeing it correctly? Could you clarify that?
I am thinking setup 2 is maybe the best option, using the 10G links as the uplink to the DC... do you agree with that?

Thanks for your response.
 

Attachments: 1.png, 2.png