PVE 5 HA cluster with iSCSI multipath shared storage

Discussion in 'Proxmox VE: Installation and configuration' started by Rais Ahmed, Sep 15, 2017.

  1. Rais Ahmed

    Rais Ahmed Member

    Joined:
    Apr 14, 2017
    Messages:
    43
    Likes Received:
    4
    Hi,
    I'm creating an environment; here are the details:
    HP blade servers, 3-node HA cluster
    SAN iSCSI multipath shared storage
    2x 10 Gb NICs, bonded together
    My question is:
    Is a single bond (2x 10 Gb NICs) enough for the cluster, or are there any recommendations to avoid bottleneck/latency issues?
    Thanks
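For the iSCSI multipath part of this setup, the two paths to the SAN are usually combined with dm-multipath. A minimal `/etc/multipath.conf` sketch, in the style the Proxmox wiki suggests for iSCSI storage (the WWID and alias below are placeholders for your actual SAN LUN):

```
# /etc/multipath.conf -- sketch; replace the WWID with your LUN's
# (find it with: /lib/udev/scsi_id -g -u -d /dev/sdX)
defaults {
    polling_interval        2
    path_selector           "round-robin 0"
    path_grouping_policy    multibus
    uid_attribute           ID_SERIAL
    failback                immediate
    no_path_retry           queue
    user_friendly_names     yes
}
blacklist {
    wwid .*                 # blacklist everything by default...
}
blacklist_exceptions {
    wwid "36001405xxxxxxxxxxxxxxxxxxxxxxxxx"   # ...except the shared LUN
}
multipaths {
    multipath {
        wwid  "36001405xxxxxxxxxxxxxxxxxxxxxxxxx"
        alias mpath-san0    # the device then appears as /dev/mapper/mpath-san0
    }
}
```

After editing, restart the multipath service and check the paths with `multipath -ll`.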
     
  2. micro

    micro Member
    Proxmox Subscriber

    Joined:
    Nov 28, 2014
    Messages:
    58
    Likes Received:
    12
    Are you planning to separate the storage (iSCSI) network, the cluster communication network, and the general network (for the VMs)? If not, you should.
     
    Rais Ahmed likes this.
  3. Rais Ahmed

    Rais Ahmed Member

    Thank you for your input. Can you please share the details of how to configure a separate network for cluster communication while using a bond?
     
  4. Rais Ahmed

    Rais Ahmed Member

    Currently I'm going to implement iSCSI on the same single network.
     
  5. micro

    micro Member
    Proxmox Subscriber

    It is recommended to use separate networks, but of course you can also do it on a single network, with separate VLANs on top of the bond.
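On PVE 5 (Debian Stretch, ifupdown) that layout can be expressed in `/etc/network/interfaces` roughly like this. A sketch only; the NIC names, VLAN tags, and addresses are assumptions to adapt to your site:

```
# /etc/network/interfaces -- sketch; adjust device names and addresses
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad            # LACP; the switch side must match
    bond-xmit-hash-policy layer2+3

# VLAN 10 on the bond: iSCSI storage traffic
auto bond0.10
iface bond0.10 inet static
    address 10.10.10.11
    netmask 255.255.255.0

# VLAN 20 on the bond: corosync cluster traffic
auto bond0.20
iface bond0.20 inet static
    address 10.10.20.11
    netmask 255.255.255.0

# Bridge for VMs and management, untagged on the bond
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
```

Each node gets its own addresses in each VLAN; the switch ports carrying the bond must be trunked for the tagged VLANs.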
     
  6. Rais Ahmed

    Rais Ahmed Member

    Got it. How can I configure a separate cluster network with VLANs and bonding? Please help.
    There are 3 networks:
    1. iSCSI shared storage
    2. VMs & nodes
    3. cluster network

    The iSCSI shared storage and VM/node traffic will share the same network, with a separate network for corosync cluster communication.
    What do you say?
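For reference, on PVE 5 the corosync network can be pinned to the dedicated cluster VLAN when the cluster is created, using the `pvecm` options documented in the admin guide. A sketch, assuming the cluster VLAN is 10.10.20.0/24 and the node addresses on it are placeholders:

```
# On the first node: create the cluster and bind corosync ring0
# to the dedicated cluster network
pvecm create mycluster -bindnet0_addr 10.10.20.0 -ring0_addr 10.10.20.11

# On each additional node: join, giving that node's own ring0 address
pvecm add 10.10.20.11 -ring0_addr 10.10.20.12
```

Keeping corosync on its own VLAN matters because cluster membership is latency-sensitive; heavy iSCSI or VM traffic sharing the same queue can cause nodes to be fenced.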
     
  7. micro

    micro Member
    Proxmox Subscriber

  8. latosec

    latosec New Member

    Joined:
    Jul 12, 2015
    Messages:
    12
    Likes Received:
    3
    I solved all my latency/bottleneck problems with:
    - 4x 10 Gb NICs on the SAN
    - 4x 10 Gb NICs on every Proxmox node (one bond of 2 NICs for communication with the SAN, and one bond of 2 NICs for cluster/VM networking/backup)
    - 2x 1 Gb NICs on every Proxmox node (one bond for traffic to the internet)
    We use two 10 Gb switches and two 1 Gb switches with MC-LAG.
    Every Proxmox network is a VLAN (cluster, storage, backup, internet).
    Every customer has a private VLAN behind a pfSense firewall (those are VMs too).
    I have a lot of traffic between VMs (we host entire infrastructures for our clients).
    Currently I have a SAN with 24x 1 TB SSDs and everything works very well.
     
    rockyli and Rais Ahmed like this.
  9. Rais Ahmed

    Rais Ahmed Member

    Thank you, latosec.
     
  10. Xahid

    Xahid Member

    Joined:
    Feb 13, 2014
    Messages:
    75
    Likes Received:
    10
    You need two extra LAN cards on each node (either 1 Gb or 10 Gb):
    Use the 2x 10 Gb (bonded or load-balanced) for the iSCSI storage network.
    Use one 1 Gb NIC for a separate cluster network.
    Use one 1 Gb NIC for the management network or internet connectivity.
     
    Rais Ahmed likes this.
  11. Rais Ahmed

    Rais Ahmed Member

    I have only 2x 10 Gb NICs, so I created the cluster with VLANs and bonding: one network for iSCSI + VMs and one VLAN-separated network for cluster communication.
     
  12. Rais Ahmed

    Rais Ahmed Member

    Now keeping it under observation; hoping for the best. :D
     
  13. Xahid

    Xahid Member

    I gave you advice for an ideal network topology with easier manageability.
     
  14. Rais Ahmed

    Rais Ahmed Member

    Yes, you're right, that would be the ideal network, but I don't have multiple NICs; I have to work with what's available. :oops:
     