Hi,
I'm setting up a new lab environment for PVE to test Ceph + Proxmox clustering capability for a planned setup.
We have the following nodes:
7x E3 v2, 32GB RAM, with 2x 1GbE LACP bonded down to a multi-chassis switch
4x E5 v1, with 8x 400GB SSDs, 2x 1GbE LACP bonded down to the multi-chassis switch, and 4x 10GbE for the Ceph network
4x E5 v3, with 2x 1GbE LACP bonded down, 2x 10GbE for Ceph, and 1x 10GbE for VM traffic.
I've been reading the documentation for the network model, but I couldn't determine whether this level of granular control is possible without resorting to elaborate configurations.
Really what I want is:
4x E5 v3 nodes: the 2x 10GbE dedicated solely to Ceph storage traffic (these nodes will strictly act as RBD clients, not servers). The 2x 1GbE LACP bond will carry management and cluster network traffic. The remaining 10GbE will sit on 2 different VLANs strictly for VM traffic (see the interfaces sketch below).
7x E3 v2 nodes: multiple VLANs trunked down. Let's say cluster network, management/Proxmox network, VM network, and also a way to reach the Ceph RBDs (these will be hosting very light VMs).
4x E5 v1 nodes: dedicated to Ceph, so I think the network configuration here is a bit easier. 2x 1GbE LACP for cluster network/management, and 4x 10GbE strictly for Ceph.
I already have 4 subnets allocated for this.
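To make it concrete, this is the rough /etc/network/interfaces layout I have in mind for one of the E5 v3 nodes. The interface names, VLAN IDs and addresses below are just placeholders, not my real values:

  # 2x 1GbE LACP bond: management + cluster network
  auto bond0
  iface bond0 inet manual
      bond-slaves eno1 eno2
      bond-mode 802.3ad
      bond-xmit-hash-policy layer2+3

  auto vmbr0
  iface vmbr0 inet static
      address 10.0.0.21/24
      gateway 10.0.0.1
      bridge-ports bond0
      bridge-stp off
      bridge-fd 0

  # 2x 10GbE bond: Ceph public network only (RBD client traffic) - no bridge needed?
  auto bond1
  iface bond1 inet static
      address 10.0.2.21/24
      bond-slaves enp3s0f0 enp3s0f1
      bond-mode 802.3ad

  # single 10GbE: VLAN-aware bridge carrying the two VM VLANs
  auto vmbr1
  iface vmbr1 inet manual
      bridge-ports enp4s0
      bridge-stp off
      bridge-fd 0
      bridge-vlan-aware yes
      bridge-vids 100 200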
My main question: is it necessary to create a bridge for each VLAN, or is a bridge only needed on VLANs where VMs will actually live?
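To put that question in concrete terms, these are the two styles I'm weighing for the trunked interfaces (VLAN ID 50 is made up):

  # Option A: a VLAN sub-interface plus a dedicated bridge per VLAN
  auto bond0.50
  iface bond0.50 inet manual

  auto vmbr50
  iface vmbr50 inet manual
      bridge-ports bond0.50
      bridge-stp off
      bridge-fd 0

  # Option B: a single VLAN-aware bridge; each VM NIC just gets a VLAN tag,
  # and host-side networks (e.g. Ceph RBD access) get their own VLAN interface on the bond
  auto vmbr0
  iface vmbr0 inet manual
      bridge-ports bond0
      bridge-stp off
      bridge-fd 0
      bridge-vlan-aware yes
      bridge-vids 2-4094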
EDIT: I've just read that, by default, VM migration happens over the defined cluster network. Is there a way to change this behavior, say, to have it go over the single 10GbE interface used for VM traffic?
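From what I've found so far, it looks like the migration network can be pinned in /etc/pve/datacenter.cfg with something along these lines (the subnet is a placeholder for the VM-traffic 10GbE network), but I'd appreciate confirmation that this is the right knob:

  migration: secure,network=10.0.3.0/24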