Hello
I need to set up a 3-node HA cluster using Proxmox VE 4.2. It will also use the local disks in each node for a Ceph cluster.
So my question is really about the preferred way to do the networking for this. I have a Layer 3 Cisco SG300-52 switch and will want to configure several VLANs, both for running Proxmox/Ceph and for the virtual machines.
My thoughts so far are that I need the following separate networks for Proxmox/Ceph:
1. Proxmox Management (2 x NICs)
2. Proxmox Cluster communication (2 x NICs)
3. Ceph replication (6 x NICs)
4. All other VLANs for VMs (4 x NICs)
There seem to be two schools of thought on this when people build hypervisors and clusters.
Would you bond all the NICs together, on both the switch and Proxmox, and then share everything out internally via OpenvSwitch, using some sort of bandwidth management to control how much each VLAN, or collection of VLANs, can have within OpenvSwitch? A rough sketch of what I mean is below.
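This is roughly how I picture option 1 looking in /etc/network/interfaces on each node. It is completely untested, and the interface names, VLAN tag and addresses are just placeholders I made up:

# Option 1: one big OVS bond carrying everything, VLANs split out internally
allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds eth0 eth1 eth2 eth3 eth4 eth5 eth6 eth7 eth8 eth9 eth10 eth11 eth12 eth13
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 mgmt

# Internal port for Proxmox management on VLAN 10 (tag is a placeholder)
allow-vmbr0 mgmt
iface mgmt inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=10
    address 10.10.10.11
    netmask 255.255.255.0
    gateway 10.10.10.1

# Similar OVSIntPort stanzas would follow for the cluster and Ceph VLANs,
# and the VM VLANs would just be tagged on the guest NICs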
Or would you create separate Linux bonds for the three Proxmox/Ceph networks listed above, and then put an OpenvSwitch bridge on its own bond for the VM VLANs? Again, a rough sketch follows.
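For option 2 I picture something like this, again untested and with made-up names, addresses and NIC assignments:

# Option 2: dedicated Linux bonds for Proxmox/Ceph, OVS only for VM traffic
auto bond0
iface bond0 inet static
    slaves eth0 eth1
    bond_mode active-backup
    bond_miimon 100
    address 10.10.10.11
    netmask 255.255.255.0
    gateway 10.10.10.1

# bond1 (cluster, eth2+eth3) and bond2 (Ceph, eth4-eth9) would look the same,
# each on its own subnet and without a gateway

# VM traffic: an OVS bridge on its own bond of the remaining four NICs
allow-vmbr0 bond3
iface bond3 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds eth10 eth11 eth12 eth13
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond3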
Obviously the first option gives much greater resilience: you can lose many NICs and the whole system keeps running, even if it slows down. I have done this before in Hyper-V, but I am totally new to OpenvSwitch and have limited exposure to Proxmox, having run just a single server so far with no VLANs.
The second option gives you failover for management and cluster communication, but if you lost both NICs in a bond then you are stuck.
I would appreciate people's thoughts on this. And if anyone has links to some walkthroughs for this, I would be grateful if you could post them.
Many Thanks