LACP Bonding Questions

Discussion in 'Proxmox VE: Networking and Firewall' started by bensode, Jun 13, 2019 at 15:36.

  1. bensode

    bensode Member

    Joined:
    Jan 9, 2019
    Messages:
    40
    Likes Received:
    3
    Hello. I'm looking to set up my Proxmox 5.4 cluster with 2 bonded ports for VMs/management and 2 bonded ports for the integrated CEPH storage. In looking over the Network Configuration docs, there is a reference to running the cluster network over a bonded link requiring active-passive mode. Which of the defined modes in the doc is active-passive? I read through them all and even word-searched the page, but the only hit on "passive" is that mention of the active-passive requirement. Did they mean active-backup?

    Thanks!
     
  2. RokaKen

    RokaKen New Member
    Proxmox Subscriber

    Joined:
    Oct 11, 2018
    Messages:
    18
    Likes Received:
    4
    I assume you are referring to section 3.3.6 of the Admin Guide: Network Configuration, which notes that running the cluster network over a bonded interface requires active-passive mode.
    That is referring to the PVE (corosync) cluster, and the mode they mean is indeed active-backup. There was a good explanation posted recently, but I can't find it at the moment. The CEPH cluster network can be an LACP bond if you desire.
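    For reference, a minimal /etc/network/interfaces sketch of that split might look like the following -- the interface names (eno1-eno4) and addresses are placeholders, not anything from your setup:

    Code:
    # active-backup bond for the PVE (corosync) cluster network
    auto bond0
    iface bond0 inet static
        address 10.10.10.11
        netmask 255.255.255.0
        bond_slaves eno1 eno2
        bond_miimon 100
        bond_mode active-backup
        bond_primary eno1

    # LACP (802.3ad) bond for the CEPH network
    # (needs a matching LACP port-channel configured on the switch)
    auto bond1
    iface bond1 inet static
        address 10.10.20.11
        netmask 255.255.255.0
        bond_slaves eno3 eno4
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3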
     
  3. bensode

    bensode Member

    Joined:
    Jan 9, 2019
    Messages:
    40
    Likes Received:
    3
    Is this link also responsible for migrations, live and otherwise? I don't see any breakdown of which network that traffic uses or whether it can be rearranged.
     
  4. RokaKen

    RokaKen New Member
    Proxmox Subscriber

    Joined:
    Oct 11, 2018
    Messages:
    18
    Likes Received:
    4
    For the PVE cluster network, the Cluster Manager page provides more information. It can be separated from the VM bridge/admin/client network (and running dual networks is recommended). A migration network for VMs is discussed at the end of that linked document.

    For the CEPH cluster network: it can be separated from the CEPH public network. This Ceph documentation shows and describes the traffic over each network. If you only have a limited number of 10G interfaces, it is better to run both the CEPH public and cluster networks over shared 10G links than to put either of them on anything slower than 10G, because of latency.
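    To make that concrete, here is a sketch of where those settings live -- the subnets are only examples, adjust them to your own layout. The dedicated VM migration network goes in /etc/pve/datacenter.cfg, and the CEPH public/cluster split goes in the [global] section of /etc/pve/ceph.conf:

    Code:
    # /etc/pve/datacenter.cfg -- route VM migration traffic over its own subnet
    migration: secure,network=10.10.20.0/24

    # /etc/pve/ceph.conf, [global] section -- separate CEPH public and cluster traffic
    public network = 10.10.30.0/24
    cluster network = 10.10.40.0/24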
     
  5. bensode

    bensode Member

    Joined:
    Jan 9, 2019
    Messages:
    40
    Likes Received:
    3
    Awesome, thanks for the light reading over the weekend. We currently run on 10G switches with a single connection for vmbr0 and another single 10G connection for CEPH. I've added two new connections (four total) split between two 10G switches and was looking to configure half for CEPH and the other half for client access/management. I guess we'll need another set for corosync. A rough sketch of that split is below.
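    Just to sketch what I have in mind (interface names and addresses are made up; I realize LACP with one member port on each of two switches only works if the switches are stacked or support MLAG, otherwise active-backup would be the safe mode for a cross-switch bond):

    Code:
    # bond0 -> vmbr0: client access / management, one port per switch
    auto bond0
    iface bond0 inet manual
        bond_slaves eno1 eno3
        bond_miimon 100
        bond_mode 802.3ad

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.11
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

    # bond1 for CEPH would follow the same pattern with eno2/eno4 (as in the earlier example)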
     
  6. RokaKen

    RokaKen New Member
    Proxmox Subscriber

    Joined:
    Oct 11, 2018
    Messages:
    18
    Likes Received:
    4
    You're welcome. If that solves your issue, please mark the thread as solved from your OP.

    That would be best current practice; however, it can be a 1G link -- corosync multicast doesn't require anything more.
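    If it helps, the PVE cluster docs suggest verifying multicast on whichever network corosync will use. omping has to be installed and started on all nodes at roughly the same time; the node names here are placeholders:

    Code:
    # quick multicast test, ~10000 packets at 1000 packets per second
    omping -c 10000 -i 0.001 -F -q node1 node2 node3

    # longer ~10 minute test to catch IGMP snooping/querier timeouts
    omping -c 600 -i 1 -q node1 node2 node3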
     