I've got an 8-node PVE 3 cluster where each node is also part of the CEPH storage pool.
Anytime I cold-boot the entire cluster (e.g. after work on the building's power system), the nodes come back online at different rates because they aren't identical hardware.
I have a number of VMs set to boot automatically, but the auto-boot fails for one of a few reasons:
1. lack of quorum (not enough hosts have finished booting yet)
2. lack of CEPH RBD blocks (again, not enough OSDs have finished booting yet)
3. I/O starvation as CEPH goes crazy trying to rebalance the pool before the last few units finish booting
I know how to solve #3 (assuming I did a controlled shutdown): ceph osd set noout.
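For anyone finding this later, here's the noout workflow I mean, sketched out (run on any node with admin access to the cluster; the exact sequence of your maintenance steps will vary):

```shell
# Before the planned full-cluster shutdown: tell Ceph not to mark
# downed OSDs "out". OSDs marked out are what trigger rebalancing.
ceph osd set noout

# ... shut down VMs and nodes, do the power work, boot everything back up ...

# Once all nodes and OSDs are back, restore normal behavior so Ceph
# will again rebalance around genuinely failed OSDs:
ceph osd unset noout
```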
But for #1 and #2, I'm at a loss.
Is there any way to tell PVE to wait until all (or almost all) the nodes - particularly CEPH OSDs - are back up before trying to auto-boot any VMs?