For what it is worth, you may wish to reconsider your design philosophy slightly. I looked at a similar project build ~1.5 years ago, and my general feeling was that:
-- Ceph was possible with a 'modest size' cluster (i.e. 3-5 Proxmox nodes with a hyperconverged Ceph storage pool)
-- but Ceph performance was not 'great' until you got into a bigger deployment, period. Ceph really likes lots (!) of disks and lots of nodes to spread the load across (rough numbers sketched below).
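To put some rough numbers on that - these are invented, purely illustrative figures, assuming Ceph's default 3-way replication:

    # back-of-envelope sizing for a small hyperconverged Ceph pool (replicated, size=3)
    # all figures below are made up for illustration only
    raw capacity    : 3 nodes x 4 OSDs x 4 TB     = 48 TB raw
    usable capacity : 48 TB / 3 replicas          = ~16 TB
                      (less in practice - Ceph wants free headroom for recovery)
    write path      : roughly speaking, a write is only acknowledged once the
                      replicas on the other nodes have it, so with just 3 nodes
                      every write touches every node - hence 'lots of disks and
                      lots of nodes' is where the load actually starts to spread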
So instead of using Ceph as the baseline build plan, we ended up with:
-- 3 nodes per cluster [building multiple parallel 3-node clusters as desired for more capacity; or, if needed, a slight increase in cluster size, i.e. 3 > 4 > 5 nodes]
-- local shared-nothing hardware RAID storage
-- regular VM backups (nightly) to a separate shared NFS storage tank (example storage config after this list)
-- local HW RAID-backed storage gave nice, robust IO performance; this architecture was 'very simple' and 'just worked'
-- outages were really a non-issue, because the servers had redundant PSUs and redundant HDDs behind HW RAID. The real-world risk of failure/downtime from a non-redundant component fault (CPU, RAM, etc.) was significantly lower than the added risk of outages from increased complexity, higher confusion, greater maintenance impact and human error that a 'more complex and elegant' design would have brought, i.e. Ceph (or ZFS, or DRBD, which were the other options I pondered).
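To make the 'very simple' part concrete, the storage side of one of these nodes boils down to something like this in /etc/pve/storage.cfg (storage IDs, IP address and export path here are invented - just a sketch of the layout):

    # fast local VM disk storage sitting on the HW RAID volume (shared-nothing)
    lvmthin: local-lvm
            thinpool data
            vgname pve
            content rootdir,images

    # shared NFS 'backup tank' - holds vzdump backups only, no live VM disks
    nfs: backup-tank
            path /mnt/pve/backup-tank
            export /tank/pve-backups
            server 192.168.10.50
            content backup
            options vers=3

The nightly backups themselves are just scheduled vzdump jobs (Datacenter > Backup in the GUI), or ad-hoc from the CLI, e.g. 'vzdump 100 --storage backup-tank --mode snapshot --compress lzo'.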
The fact that we can do zero-downtime (live) VM migrations in shared-nothing clusters makes "VM re-balancing" possible.
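For reference, with local storage the live migration copies the VM's local disks over to the target node while the guest keeps running; from the CLI it is roughly this (VM ID and node name invented):

    # live-migrate VM 100 to node 'pve2', copying its local volumes along the way
    qm migrate 100 pve2 --online --with-local-disks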
The fact that an NFS shared-storage "VM backup tank" is easy to set up and works really well makes 'disaster recovery' possible easily / with modest downtime.
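And if a node dies outright, recovery from any surviving node is boring in a good way: restore last night's vzdump file from the NFS tank, roughly like this (file name, VM ID and target storage are examples only):

    # restore VM 100 from its most recent backup on the NFS backup tank
    qmrestore /mnt/pve/backup-tank/dump/vzdump-qemu-100-2017_06_01-01_00_00.vma.lzo 100 --storage local-lvm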
Clearly we don't have HA fault tolerance here, but again it is a matter of assessing the <actual risks> of your environment, the impact of <different build designs on HA / fault tolerance / risk / chance of failure>, and <what really is the bad outcome of a brief outage of a VM?>
Just my 2 cents!
Maybe Ceph or other things have changed tremendously in the last ~1.5 years - I am not certain - and I would be happy if others feel the desire to comment!
Tim
------------------------------------------------------------
Tim Chipman
FortechITSolutions
http://FortechITSolutions.ca
"Happily using Proxmox to support client projects for nearly a decade"