Network beginner questions

Discussion in 'Proxmox VE: Networking and Firewall' started by st6f9n, Aug 9, 2019.

  1. st6f9n

    st6f9n New Member

    Joined:
    Feb 15, 2019
    Messages:
    2
    Likes Received:
    0
    Hello,

    I'm a total beginner to Proxmox/Ceph, I have the following hardware available
    (no more changes are possible):

6 x Supermicro Server: 2x 10GbE NICs, 4x 1GbE NICs, 2x 256GB NVMe, 2x 2TB NVMe, 256GB RAM
    2 x Switch: Netgear ProSAFE XS728T

Here are my ideas:

The XS728T is not a stacking switch, so I can't use MLAG. As far as I can see, I only
have the choice between a LAG on one switch or active-backup bonding across two switches,
because no other bonding mode is possible (why?). I plan to have 3 separate networks
(3 VLANs): Corosync Cluster Network, Ceph Storage Network, Proxmox Client Network

    Proxmox Client Network:
    * External Switch #3 (outside of our responsibility): LAG with 2x1GB NICs

    Corosync Cluster Network:
    * Switch #1: 1x 1GB NIC (passive RRP mode)
    * Switch #2: 1x 1GB NIC (passive RRP mode)

    Ceph Storage Network:

    Solution No.1:
    * Switch #1: LAG with 2x10GB NICs (Switch #2 is configured identically, ready to take over)
    * Switch #2: 1x1GB NIC (in an active-backup bond with the LAG on Switch #1,
    as a short-term fallback)
    * If Switch #1 fails, I will manually move the 2x10GB LAG to Switch #2

    Solution No.2:
    * Switch #1: 1x10GB NIC (active-backup bonding)
    * Switch #2: 1x10GB NIC (active-backup bonding)

    I think Solution No.1 is better for latency, and Solution No.2 better for redundancy.
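    In case it helps to be concrete, this is roughly how I would configure the active-backup
    bond of Solution No.2 in /etc/network/interfaces (interface names and the address are
    just placeholders for my two 10GbE ports and the Ceph storage subnet):

    ```
    auto bond0
    iface bond0 inet static
        address 10.10.10.11/24              # placeholder Ceph storage subnet
        bond-slaves enp5s0f0 enp5s0f1       # placeholder names for the two 10GbE NICs
        bond-miimon 100
        bond-mode active-backup
        bond-primary enp5s0f0               # prefer the link on Switch #1
    ```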


    Any suggestions? Thanks!

    Best regards

    Stefan
     
  2. st6f9n

    It's a bit confusing:
    * Proxmox public network: for the VMs (?)
    * Ceph public network ("highly recommended")
    * Ceph cluster network (optional): Given my hardware configuration, does it make sense to set up a separate network (10 GB switch)?
    * Corosync cluster network: Could it be the same as the "Ceph cluster network"? Is it necessary given my hardware configuration?
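    For reference, my understanding is that the two Ceph networks are set in ceph.conf
    like this (the subnets are just examples, not my actual ranges):

    ```
    [global]
        public_network  = 192.168.10.0/24   # client/monitor traffic (example subnet)
        cluster_network = 192.168.20.0/24   # OSD replication traffic (example subnet)
    ```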
     
    #2 st6f9n, Aug 11, 2019
    Last edited: Aug 11, 2019
  3. aaron

    aaron Member

    Joined:
    Jun 3, 2019
    Messages:
    55
    Likes Received:
    3
    Good idea; just configuring two links (rings in the config file), one on each NIC/switch combination, is fine. https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvecm_redundancy shows a sample config file for this.
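    Roughly, the nodelist entry for a node with two links looks like this (the names and
    addresses are examples; see the linked guide for a complete file):

    ```
    nodelist {
      node {
        name: pve1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.1.1   # NIC on Switch #1 (example address)
        ring1_addr: 10.10.2.1   # NIC on Switch #2 (example address)
      }
      # further nodes repeat the pattern with their own addresses
    }
    ```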

    I would not go with Solution No.1, because if the Ceph network falls back to 1GB, your Ceph cluster will most likely become unusable pretty fast.
    I'm not sure about the claim of better latency in Solution No.1. More bandwidth, yes, but latency doesn't improve with two bonded 10GBit links; it would improve with 100GBit.
     