PVE 5 HA cluster with iSCSI multipath shared storage

Rais Ahmed

Hi,
I'm setting up a new environment; here are the details:
HP blade servers, 3-node HA cluster
SAN iSCSI multipath shared storage
2x 10Gb NICs per node, configured as a bond
My question is:
is a single bond (2x 10Gb NICs) enough for the cluster, or are there any recommendations to avoid bottleneck/latency issues?
Thanks
 
Are you planning to separate the storage (iSCSI) network, the cluster communication network, and the general network (for the VMs)? If not, you should.
 
Thank you for your input. Can you please share the details of how I can configure a separate network for cluster communication while using a bond?
 
It is recommended to use separate physical networks, but of course you can also do it on a single network with bonding and separate VLANs.
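As a minimal sketch of that single-network approach in /etc/network/interfaces on PVE 5: this assumes an LACP-capable switch, eth0/eth1 as the two 10Gb NICs, and made-up VLAN IDs and subnets (10 = storage, 20 = cluster, 30 = VMs); adjust all of them to your environment.

Code:
# LACP bond over the two 10Gb NICs
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode 802.3ad
        bond_miimon 100
        bond_xmit_hash_policy layer3+4

# Storage (iSCSI) VLAN
auto bond0.10
iface bond0.10 inet static
        address 10.10.10.11
        netmask 255.255.255.0

# Cluster (corosync) VLAN
auto bond0.20
iface bond0.20 inet static
        address 10.10.20.11
        netmask 255.255.255.0

# VM VLAN, bridged so guests can attach to it
auto vmbr0
iface vmbr0 inet static
        address 10.10.30.11
        netmask 255.255.255.0
        bridge_ports bond0.30
        bridge_stp off
        bridge_fd 0

The storage and cluster VLANs stay plain VLAN interfaces on the bond, while only the VM VLAN is bridged.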
 
Got it. How can I configure a separate cluster network with VLANs and bonding? Please help.
There would be 3 networks:
1 for iSCSI shared storage
2 for VMs & nodes
3 for the cluster network

The iSCSI shared storage and the VM/node network would share the same links, with a separate network for cluster (corosync) communication.
What do you say?
 
I solved all my latency/bottleneck problems with:
- 4x 10Gb NICs on the SAN
- 4x 10Gb NICs on every Proxmox node (one bond with 2 NICs for communication with the SAN, and one bond with 2 NICs for cluster/VM networking/backup)
- 2x 1Gb NICs on every Proxmox node (one bond for traffic to the internet)
We use two 10Gb switches and two 1Gb switches with MC-LAG.
Every Proxmox network is a VLAN (cluster, storage, backup, internet).
Every customer has a private VLAN behind a pfSense firewall (those firewalls are VMs too).
There is a lot of traffic between VMs (we host entire infrastructures for our clients).
Currently I have a SAN with 24x 1TB SSDs, and everything works very well.
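Since the thread is about iSCSI multipath: a rough sketch of the multipath side on a PVE node, assuming one portal per SAN controller (all addresses and the blacklisted device below are placeholders, and your SAN vendor's recommended multipath settings take precedence over this minimal example):

Code:
# Discover and log in to both SAN portals (one per controller/path)
iscsiadm -m discovery -t sendtargets -p 10.10.10.1
iscsiadm -m discovery -t sendtargets -p 10.10.11.1
iscsiadm -m node --login

# /etc/multipath.conf (minimal example)
defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        failback                immediate
}
blacklist {
        devnode "^sda$"    # local boot disk, adjust to your system
}

# Verify that both paths show up and are active
multipath -ll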
 
Thank you, latosec.
 

You need two extra NICs on each node (either 1G or 10G):
Use the 2x 10G (bonded or load-balanced) for the iSCSI storage network.
Use one 1G for a separate cluster network.
Use one 1G for the management network or internet connectivity.
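If corosync gets its own network as suggested above, one way to bind the cluster to it on PVE 5 is at creation time (the cluster name, subnet, and addresses below are placeholders):

Code:
# On the first node: create the cluster bound to the dedicated
# corosync subnet
pvecm create mycluster --bindnet0_addr 10.10.20.0 --ring0_addr 10.10.20.11

# On each additional node: join via the first node, giving this
# node's own address on the cluster network
pvecm add 10.10.20.11 --ring0_addr 10.10.20.12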
 
I have only the 2x 10G NICs, and I have already created the cluster with VLANs and bonding: one network for iSCSI + VMs and one network for cluster communication, each on its own VLAN.
 
I gave you advice for an ideal network topology with easier manageability.
 
