That won't work, as we don't allow it.
On its own, that's not a very convincing argument.
Create a new Ceph pool and, if the "Add Storage" checkbox is enabled, a matching storage will be added to the Proxmox VE config. It will be of type RBD (RADOS block device), the block-device functionality on top of Ceph.
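For reference, the same thing should be doable on the CLI; from memory (so take the exact option and field names as an assumption rather than gospel) it is something along the lines of

    pveceph pool create vmpool --add_storages 1

and the generated entry in /etc/pve/storage.cfg then looks roughly like

    rbd: vmpool
            content images,rootdir
            krbd 0
            pool vmpool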
That's what I had done, too; it was mostly while testing the import of VMs from other hypervisors that I felt a file system might save some copying around, especially with 'huge' disks that are actually quite sparse (I am so used to adding TB disks to VMs and then not using them, relying on sparseness and trimming to keep them small).
RBD is designed with VMs in mind. CephFS has a few properties that make it unsuitable for VM storage. For one, storing QCOW2 files on it adds another layer that is not needed.
That could have made it into your excellent documentation (perhaps with more data), because it's not that obvious to me as an RHV/oVirt GlusterFS user: GlusterFS has never gained fame as a speed demon, but unless you're talking InfiniBand, another layer without a kernel/userland transition doesn't sound that expensive.
oVirt/RHV actually puts another block/chunk layer on top of the file system, but that's mostly to ensure some distribution of the otherwise monolithic disk files. And then it's also because oVirt/RHV was originally designed for SAN storage.
The major one, though, is that if an MDS (metadata server, providing the FS functionality) fails and a standby MDS needs to take over, it can take a while until the CephFS is available again. On large file systems it might even take a few minutes. Not something that can be used for VM images.
On one hand that's another welcome insight; on the other, 'minutes' certainly sounds disastrous in a storage context.
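From a very quick skim of the Ceph docs, it looks like a standby-replay MDS (one that continuously follows the active MDS's journal) is meant to shorten that takeover window; if I read it correctly it is enabled per file system with something like

    ceph fs set <fsname> allow_standby_replay true

though I take your point that even a shorter failover would not make CephFS a sensible place for VM images.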
So does putting the node that runs the active MDS into maintenance transfer that role to a standby MDS without such an expensive arbitration? And does starting a standby server demote the currently active one to a standby? (I guess I should start reading the Ceph documentation...)
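If I understand the tooling right, one can at least see who is active and who is on standby, and trigger a hand-over deliberately, with something like

    ceph fs status            # shows which MDS is active and which are standby
    ceph mds fail <mds-name>  # mark the active one failed so a standby takes over

but whether Proxmox VE's maintenance mode does anything like that automatically is exactly the kind of thing I still need to read up on.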
Again, I may be rather spoiled by how tolerant Gluster is to single-node storage disruptions, but then my motivation for coming to Proxmox and Ceph is the lack of any future for Gluster and oVirt now that all downstream commercial products are gone.