I just wanted to validate my plan for setting up a new PVE 2.1 cluster and make sure that I understand all the requisite pieces. I have the following nodes:
- vm01 - beefy node that will host VMs
- vm02 - beefy node that will host VMs
- vm03 - low-powered node that will only be a member to obtain proper quorum
I will be able to fence all three nodes using an APC outlet user, but I do not have shared storage.
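To make sure I have the fencing piece right: I'm planning to define the APC device in /etc/pve/cluster.conf roughly like this (the IP, credentials, and outlet numbers below are placeholders, not my real values):

    <fencedevices>
      <fencedevice agent="fence_apc" name="apc" ipaddr="10.0.0.5" login="fenceuser" passwd="..."/>
    </fencedevices>

    <clusternode name="vm01" nodeid="1" votes="1">
      <fence>
        <method name="power">
          <device name="apc" port="1"/>
        </method>
      </fence>
    </clusternode>

with matching <fence> blocks (different ports) for vm02 and vm03.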
The plan would be to set up two DRBD volumes per the wiki's recommendation, where vm01 will run VMs from the first volume and vm02 will run VMs from the second volume.
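Following the wiki's two-resource layout, I'd expect each resource to look something like this (the hostnames match my nodes, but the backing disks, IPs, and shared secret are placeholders; r1 would be identical except for /dev/drbd1, a second backing disk, and port 7789):

    resource r0 {
      protocol C;
      startup {
        wfc-timeout 0;
        degr-wfc-timeout 60;
        become-primary-on both;
      }
      net {
        cram-hmac-alg sha1;
        shared-secret "my-secret";
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
      }
      on vm01 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 10.0.0.1:7788;
        meta-disk internal;
      }
      on vm02 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 10.0.0.2:7788;
        meta-disk internal;
      }
    }

Each DRBD device would then become an LVM physical volume backing its own LVM storage entry in PVE.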
Each of the nodes has a public/external NIC (eth0) and a private/internal NIC (eth1 using a 10.x.x.x net). I'll force all of the Corosync multicast traffic and DRBD over the private interface and have the bridge and subsequent VMs use the external interface so they will be publicly available.
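Concretely, I'm thinking of an /etc/network/interfaces along these lines on each node (addresses here are made up), with /etc/hosts resolving each node's name to its 10.x address so corosync binds to the private NIC:

    # public side: bridge for VM traffic
    auto vmbr0
    iface vmbr0 inet static
        address 203.0.113.11
        netmask 255.255.255.0
        gateway 203.0.113.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

    # private side: corosync + DRBD
    auto eth1
    iface eth1 inet static
        address 10.0.0.1
        netmask 255.255.255.0

Does that match what others are doing?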
QUESTIONS
- I'll need DRBD in order to have any chance at supporting HA, where VMs can auto-migrate if there's a failure on either vm01 or vm02, right? (See the cluster.conf sketch after this list for how I understand HA-managed VMs are declared.)
- What's the downside if I leave DRBD out of the equation entirely and use only local storage, aside from losing the ability to auto-failover VMs?
- For future expansion, would my options be to pick one of the following?
- Add nodes in pairs, each pair with its own DRBD setup similar to (but completely separate from) the one on vm01 and vm02, with shared storage limited to just those two nodes
- Forget DRBD and add dedicated shared storage (SAN or whatever) that all of the nodes could use
- Will it be easier with this setup to just use KVM and ignore OpenVZ? Am I right that if you want to use both, you'd need separate LVM volumes dedicated to each (since OpenVZ containers need a mounted filesystem, while KVM on DRBD uses raw logical volumes)?
- I know you can migrate OpenVZ containers without using DRBD or shared storage, but you won't be able to have any sort of auto-migrate in a failure situation without them, right?
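For the HA question above, here's how I understand HA-managed VMs get declared in /etc/pve/cluster.conf on PVE 2.x (the vmids are just examples):

    <rm>
      <pvevm autostart="1" vmid="101"/>
      <pvevm autostart="1" vmid="201"/>
    </rm>

If I've got any of that wrong, please correct me.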