If I wanted to trick the VPS provider, I would first check whether they rely on ARP. In that case I would use an L2 VPN (WireGuard is L3 only): Cloudzy VPS eth0 -> bridge with VPN -> L2 VPN -> Proxmox -> bridge with VPS -> VPS
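A rough sketch of such an L2 tunnel with GRETAP (interface names and the placeholder IPs 203.0.113.10 / 198.51.100.20 are made up; any L2-capable tunnel like VXLAN would work the same way):

```shell
# On the Cloudzy VPS: L2 GRE tunnel towards the Proxmox host
ip link add gretap1 type gretap local 203.0.113.10 remote 198.51.100.20
ip link set gretap1 up

# Bridge the tunnel so Ethernet frames pass through unmodified
ip link add name br0 type bridge
ip link set gretap1 master br0
ip link set br0 up
```

The mirror setup on the Proxmox side bridges gretap with the VM's vmbr, so the guest's frames exit through the VPS's eth0 as if the VM sat there.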
1 MON - fine on a single-node Ceph machine.
2 MON - bad: lose one and you have no quorum.
3 MON - regular setup, tolerates one MON failure.
4 MON - still tolerates only one failure, so no benefit over 3.
5 MON - I suggest this number only on a really huge setup.
Try to lower your MON count to 3.
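The reasoning behind the list above is plain majority math (a sketch, not an actual Ceph API): quorum needs more than half of the MONs alive, which is why 4 is no better than 3.

```python
def quorum_size(n_mons: int) -> int:
    """Monitors needed for a Paxos majority."""
    return n_mons // 2 + 1

def tolerated_failures(n_mons: int) -> int:
    """How many MONs you can lose and still keep quorum."""
    return n_mons - quorum_size(n_mons)

for n in range(1, 6):
    print(f"{n} MON -> tolerates {tolerated_failures(n)} failure(s)")
# 2 MONs tolerate 0 failures; 3 and 4 both tolerate 1; 5 tolerates 2.
```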
I suggest you take a look at these links
https://www.osris.org/article/2019/03/01/ceph-osd-site-affinity
https://ceph.io/en/news/blog/2015/crushmap-example-of-a-hierarchical-cluster-map/
During maintenance I set the noout, norebalance and norecover flags before shutting down an OSD/server. That stops Ceph from moving data around to the other OSDs.
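The flag dance looks roughly like this (`ceph osd set`/`unset` are real subcommands; run them from any node with an admin keyring):

```shell
# Before shutting down the OSD/server:
ceph osd set noout        # don't mark stopped OSDs "out"
ceph osd set norebalance  # don't shuffle PGs around
ceph osd set norecover    # don't start recovery

# ... do the maintenance, reboot, etc. ...

# After the node is back and its OSDs are up again:
ceph osd unset noout
ceph osd unset norebalance
ceph osd unset norecover
```

Don't forget the unset step, or the cluster will sit degraded forever.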
In some Ceph talks it was mentioned that a single failing HDD can impact the whole cluster even when SMART shows no evidence of the coming death. So you must keep track of disk latency yourself...
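One way to catch such a disk early is to watch per-OSD latency from `ceph osd perf` and flag outliers. A sketch below; the JSON field names (`osd_perf_infos`, `perf_stats`, `commit_latency_ms`) are assumptions about the `ceph osd perf -f json` output, so verify them against your Ceph version:

```python
import json
import statistics

# Sample shaped like `ceph osd perf -f json` output (field names are
# an assumption; check your Ceph version's actual schema).
sample = json.loads("""
{"osd_perf_infos": [
  {"id": 0, "perf_stats": {"commit_latency_ms": 4,   "apply_latency_ms": 3}},
  {"id": 1, "perf_stats": {"commit_latency_ms": 5,   "apply_latency_ms": 4}},
  {"id": 2, "perf_stats": {"commit_latency_ms": 900, "apply_latency_ms": 850}}
]}
""")

def slow_osds(perf: dict, factor: float = 10.0) -> list:
    """Return OSD ids whose commit latency is `factor`x the median."""
    lat = {o["id"]: o["perf_stats"]["commit_latency_ms"]
           for o in perf["osd_perf_infos"]}
    med = statistics.median(lat.values())
    return [osd for osd, ms in lat.items() if med and ms > factor * med]

print(slow_osds(sample))  # OSD 2 stands out long before SMART complains
```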