New to Proxmox: how to set up the network

Extin

New Member
Oct 10, 2025
Hi all, a real newbie question. New to Proxmox as a platform, not to virtualisation.

I am planning to build a 3-node cluster with Ceph, but I am getting stuck on the best way to set up the network.

Hardware:
3x Supermicro servers, each with:
2x 10G RJ-45
2x 100G SFP28
2x 1G RJ-45

DC:
1x 4-port 10G switch
6x 1G uplinks from the DC gateway
1x 4-port 100G SFP switch (optional)

IP:
2x /27 ranges
9 IPs for management/IPMI, etc.

What's the best approach here: how many network adapters, and for what?

1. I have set up the cluster/corosync network on the 1G ports. Since I don't have enough switch ports left over for this in the DC, I am thinking of using direct links, with no bridging or bonding on them (so using 2x 1G per machine).
Is this possible, or is a switch mandatory?

2. vmbr0 for the VMs (public): 2x 10G in an OVS bond to the DC gateway (thinking that a routed setup is best for assigning the /27 -> https://pve.proxmox.com/wiki/Network_Configuration ).
Management is also connected to this bridge; is that possible?
I have also connected vmbr0 to a pfSense firewall + DHCP on vmbr1 (for cross-communication between a VM on host A and another VM on host B that is not publicly reachable).

3. vmbr2: 2x 100G ports in a routed setup (with fallback), i.e. without a switch, for the storage cluster, OR 1x 100G port to a switch (I only have one 100G SFP switch).
So in the first case both SFP28 ports are used, and when using a switch I have one SFP28 port per machine left over.

Do I see this correctly? Am I missing anything? Am I overcomplicating this?

At the moment I have the 3 machines at home. I want to set up the cluster here, then move them to the DC and give them the public IPs.
Or will implementing it this way cause a lot of issues?
 
Last edited:
1. I have set up the cluster/corosync network on the 1G ports. Since I don't have enough switch ports left over for this in the DC, I am thinking of using direct links, with no bridging or bonding on them (so using 2x 1G per machine).
Is this possible, or is a switch mandatory?
It's possible to do this without a switch; that basically gives you a full-mesh network. You can see our wiki how-to for options to configure that: while it's written with Ceph in mind, the basics do not change, so it will also work for corosync.
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
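As a rough sketch, the "Routed Setup (Simple)" variant from that page could look like this for the 1G corosync mesh on one node (the interface names eno3/eno4 and the 10.15.15.0/24 addressing are only placeholders, adjust them to your hardware):

# /etc/network/interfaces on node 1: eno3 is assumed to link directly to node 2, eno4 to node 3
auto eno3
iface eno3 inet static
        address 10.15.15.1/24
        up ip route add 10.15.15.2/32 dev eno3
        down ip route del 10.15.15.2/32 dev eno3

auto eno4
iface eno4 inet static
        address 10.15.15.1/24
        up ip route add 10.15.15.3/32 dev eno4
        down ip route del 10.15.15.3/32 dev eno4

The other two nodes get the mirrored config, and you would then use the 10.15.15.x addresses as the corosync link when creating the cluster.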
2. vmbr0 for the VMs (public): 2x 10G in an OVS bond to the DC gateway (thinking that a routed setup is best for assigning the /27 -> https://pve.proxmox.com/wiki/Network_Configuration ).
Management is also connected to this bridge; is that possible?
The PVE management API daemon listens on all local IPs by default. On its own it won't produce much traffic, and the things that can produce more traffic (like VM migrations or replications) can be configured to go through a specific network in the datacenter options. Most of the time it's basically just relevant to separate heavy (IO) traffic from the latency-sensitive corosync cluster network and to give the Ceph cluster network enough bandwidth, both of which you do here, so the third 10G network for public/DC traffic sounds good.
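For example, to pin migration traffic to a dedicated network you can set it under Datacenter -> Options in the GUI, which ends up in /etc/pve/datacenter.cfg roughly like this (the CIDR is just a placeholder for whatever network you choose):

migration: secure,network=10.20.20.0/24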

3. vmbr2: 2x 100G ports in a routed setup (with fallback), i.e. without a switch, for the storage cluster, OR 1x 100G port to a switch (I only have one 100G SFP switch).
So in the first case both SFP28 ports are used, and when using a switch I have one SFP28 port per machine left over.
A routed setup might give you a bit more throughput, but that depends on the CPUs and disks available. Going full-mesh can give you a bit more redundancy, as the NICs of the servers are in your critical path anyway and avoiding an extra SPOF component can only help, but it is a trade for a bit more complexity, as handling a simple switch is always a bit easier. So IMO it would mostly depend on whether you can use the extra 100G port for anything useful.
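Whichever way you wire the 100G links, the resulting subnet is what you point Ceph at. A minimal excerpt of what usually ends up in /etc/pve/ceph.conf (normally written for you by pveceph init; the subnets here are placeholders for your public and cluster networks):

[global]
        public_network = 10.30.30.0/24
        cluster_network = 10.40.40.0/24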
Do I see this correctly? Am I missing anything? Am I overcomplicating this?
The underlying ideas are rather solid and avoid common pitfalls (like having corosync and Ceph/backup traffic on the same network); how well it works out depends on more things (the workloads running in the PVE guests, more HW details, ...).

At the moment I have the 3 machines at home. I want to set up the cluster here, then move them to the DC and give them the public IPs.
Or will implementing it this way cause a lot of issues?
A network change is naturally always a bit tricky to get right, but it sounds like you would not be running production yet while evaluating this at home, so IMO there is no real downside here if you already have the HW at home anyway. On the contrary, debugging stuff at home can be much simpler, as you are physically near the HW and simply have more comfort and less pressure most of the time.
But sure, some things (like cooling, and thus temperatures, or power) are different in a DC, though normally they really should be better there than at home... :)
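One thing to keep in mind for the move (just a sketch, names and addresses are placeholders): the node addresses live in /etc/network/interfaces, /etc/hosts and, for the cluster links, in /etc/pve/corosync.conf, where the relevant parts look roughly like this and config_version has to be increased on every manual edit:

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.15.15.1
  }
  ...
}

totem {
  config_version: 2
  ...
}

If corosync stays on the private 1G mesh, those addresses do not have to change when you swap in the public IPs at the DC, which keeps the move a lot less risky.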