Cluster cold start timing problems

athompso

Member
Sep 13, 2013
I've got an 8-node PVE 3 cluster where each node is also part of the Ceph storage pool.
Any time I cold-boot the entire cluster (e.g. after work on the building's power system), the nodes, which are not all identical, take varying amounts of time to come online.
I have a number of VMs set to boot automatically, but the auto-boot fails for any of several reasons:
1. lack of quorum (not enough hosts have finished booting yet)
2. lack of Ceph RBD devices (again, not enough OSDs have finished booting yet)
3. I/O starvation as Ceph goes crazy trying to rebalance the pool before the last few nodes finish booting
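
For what it's worth, these are the commands I use to check each of those after a cold boot (exact output wording varies between versions, so the comments are approximate):

    # 1. Cluster quorum - look at the quorum/quorate line in the output.
    pvecm status

    # 2. Ceph health: mon quorum plus how many OSDs are up/in.
    ceph -s

    # 3. Watch recovery/backfill (rebalancing) activity live.
    ceph -w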

I know how to solve #3 (assuming I did a controlled shutdown): ceph osd set noout.
But for #1 and #2, I'm at a loss.
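
To be explicit, the #3 workaround I mean is just bracketing the maintenance window like this:

    # Before the planned shutdown: stop Ceph from marking down OSDs
    # "out" and rebalancing while the nodes are powered off.
    ceph osd set noout

    # ...shut everything down, do the power work, boot it all back up...

    # Once every OSD is back up, re-enable normal out-marking.
    ceph osd unset noout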

Is there any way to tell PVE to wait until all (or almost all) the nodes - particularly the Ceph OSDs - are back up before trying to auto-boot any VMs?
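
If there's no built-in way, the best I can come up with is disabling the built-in autostart and starting the VMs from a boot script once Ceph reports healthy - something like this untested sketch (the VM IDs are just placeholders):

    #!/bin/sh
    # Untested sketch: run once at boot instead of PVE's own VM autostart.
    # Block until Ceph reports HEALTH_OK (mons in quorum, OSDs up),
    # then start the VMs in order.

    until ceph health 2>/dev/null | grep -q HEALTH_OK; do
        echo "Waiting for Ceph to become healthy..."
        sleep 10
    done

    # VM IDs below are placeholders - list the ones set to autostart.
    for vmid in 100 101 102; do
        qm start "$vmid"
    done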
 
