Hi all, a real newbie question. I'm new to Proxmox as a platform, not to virtualisation.
I am planning to build a 3-node cluster with Ceph, but I am getting stuck on the best way to set up the network.
Hardware:
3x Supermicro servers, each with:
2x 10Gb RJ-45
2x 100Gb SFP28
2x 1Gb RJ-45
DC:
1x 4-port 10Gb switch
6x 1Gb uplinks from the DC gateway
1x 4-port 100Gb SFP switch (optional)
IP:
2x /27 ranges
9 IPs for management / IPMI etc.
What's the best approach here: how should I use all these network adapters?
1. I have set up the cluster/corosync network on the 1Gb ports. Since I don't have enough switch ports left over for this in the DC, I am thinking of using direct links between the nodes, with no bridging or bonding on this (so using 2x 1Gb per machine; see the corosync sketch after this list).
Is this possible, or is a switch mandatory?
2. vmbr0 for the VMs (public): 2 ports in an OVS bond to the DC gateway. (I'm thinking a routed setup is best for assigning the /27 -> https://pve.proxmox.com/wiki/Network_Configuration; see the vmbr0 sketch after this list.)
Management is also connected to this bridge; is that possible?
I also connected vmbr0 to a pfSense firewall + DHCP on vmbr1 (for cross-communication between a VM on host A and a VM on host B that is not publicly reachable).
3. vmbr2: 2x 100Gb ports in a routed setup with fallback (so without a switch) for the storage cluster, OR 1x 100Gb port to a switch (I only have one 100Gb SFP switch); see the storage sketch after this list.
So in the first situation both SFP28 ports are used, and when using a switch I have one SFP28 port per machine left over.
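To make point 1 concrete, this is roughly what I had in mind for the direct links: the routed example from the Full Mesh Network for Ceph Server wiki article, but applied to the 1Gb ports for corosync. The interface names (eno3/eno4) and the 10.14.14.0/24 range are just placeholders I made up:

```
# node1, corosync address 10.14.14.51 (all names/addresses are placeholders)
# eno3 is the direct cable to node2 (10.14.14.52), eno4 to node3 (10.14.14.53)
auto eno3
iface eno3 inet static
        address 10.14.14.51/24
        up   ip route add 10.14.14.52/32 dev eno3
        down ip route del 10.14.14.52/32

auto eno4
iface eno4 inet static
        address 10.14.14.51/24
        up   ip route add 10.14.14.53/32 dev eno4
        down ip route del 10.14.14.53/32
```

I would then create the cluster against these addresses, something like `pvecm create <name> --link0 10.14.14.51` on the first node and `pvecm add 10.14.14.51 --link0 10.14.14.52` when joining the second, possibly with a second ring (--link1) on another network for redundancy.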
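For point 2, this is roughly the vmbr0 layout I'm picturing: the two uplink ports in an OVS bond, with the VMs and the node's management IP both on the bridge. It assumes the openvswitch-switch package is installed; interface names (eno1/eno2) and the 203.0.113.0/27 addresses are placeholders for my real /27:

```
# the two physical uplink ports (placeholder names)
auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

# OVS bond of the two uplinks; active-backup needs no LACP support on the GW side
auto bond0
iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds eno1 eno2
        ovs_options bond_mode=active-backup

# public bridge: VMs plug in here, and the node's management IP sits on it too
auto vmbr0
iface vmbr0 inet static
        ovs_type OVSBridge
        ovs_ports bond0
        address 203.0.113.11/27
        gateway 203.0.113.1
```

pfSense would then get one virtual NIC on vmbr0 (WAN, one of the /27 addresses) and one on vmbr1 (LAN + DHCP for the VMs that are not publicly reachable).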
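And for point 3, the no-switch variant would reuse the same routed-mesh pattern as the corosync sketch above, just on the two 100Gb ports with their own subnet (the wiki also documents a "with fallback" and an FRR-based variant, which I would follow for the exact fallback config). Ceph would then be pointed at that subnet, something like this (10.15.15.0/24 is a placeholder):

```
# mesh the two 100Gb ports like the corosync sketch, but in 10.15.15.0/24,
# then tell Ceph to use that subnet for its public and cluster network:
pveceph init --network 10.15.15.0/24 --cluster-network 10.15.15.0/24
```

With the single 100Gb switch instead, each node would just get one SFP28 port with a static address in that subnet, and the second SFP28 port stays free.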
Am I seeing this correctly? Am I missing anything? Am I overcomplicating it?
At the moment I have the 3 machines at home. I want to set up the cluster first, then move them to the DC and give them the public IPs.
Or will this way of implementing it cause a lot of issues?
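My thinking is that if corosync and the storage mesh stay on private ranges over the direct links, the move to the DC only changes the public side. These are the places I expect to have to touch per node (am I missing any?):

```
# public side only, assuming corosync/Ceph keep their private mesh subnets:
/etc/network/interfaces     # vmbr0 address and gateway
/etc/hosts                  # if the node names resolve to the public IPs
# If corosync were on public IPs instead, /etc/pve/corosync.conf (ringX_addr,
# plus bumping config_version) would also need editing -- another reason to
# keep corosync on the private direct links.
```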