I was referring to the Proxmox/Ceph install video, which clearly recommends fast NVMe SSD drives for the OS, with the Ceph monitors/journals installed there?
Fast NVMes are always nice; if you have the available slots and don't care about the cost, sure, go nuts. In my experience, the only load generated on the boot device is the logs. Do not mix journal and boot devices, that's an invitation for disaster. I'd use the NVMes for the journals, and wouldn't lose any sleep over the boot devices; just mirror whatever you pick.
You only need 8-16GB for boot devices; I don't know how many slots you have available in your chassis, but I'd use 2 for boot devices (32GB or smaller SSDs would suit you fine, and they're cheap enough that you can pick up a few cold spares too). NVMes for journal, with no more than 5 OSDs per journal disk, although this isn't a hard rule; I've seen configs with 12 OSD : 1 journal used without issue. You'll need to benchmark your setup to find the optimal config. You need approximately 5GB of journal space per OSD. Fill the rest of your slots with OSDs. Since your load is HV, you will most likely run a 3x-replicated pool, so your usable space will be your total OSD space / 3; plan your hardware accordingly.
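If it helps, here's a rough back-of-the-envelope sketch of that math. The drive counts and sizes below are just placeholder assumptions, plug in your own chassis details:

```python
# Rough Ceph capacity planning sketch -- all numbers are placeholder
# assumptions; substitute your own OSD count, drive sizes, and journal layout.

JOURNAL_GB_PER_OSD = 5        # approx journal space needed per OSD
MAX_OSDS_PER_JOURNAL = 5      # soft guideline, not a hard rule
REPLICA_COUNT = 3             # typical 3x-replicated pool for HV workloads

def plan(osd_count, osd_size_gb, journal_devices):
    """Return total journal space, OSD:journal ratio, and usable capacity."""
    journal_gb_needed = osd_count * JOURNAL_GB_PER_OSD
    osds_per_journal = osd_count / journal_devices
    usable_gb = (osd_count * osd_size_gb) / REPLICA_COUNT
    return journal_gb_needed, osds_per_journal, usable_gb

# Example: 10 x 2TB OSDs behind 2 NVMe journal devices
journal_gb, ratio, usable_gb = plan(osd_count=10, osd_size_gb=2000, journal_devices=2)
print(f"Journal space needed: {journal_gb} GB total")
print(f"OSDs per journal device: {ratio:.0f} (guideline: <= {MAX_OSDS_PER_JOURNAL})")
print(f"Usable capacity at {REPLICA_COUNT}x replication: {usable_gb:.0f} GB")
```

With those placeholder numbers you'd need about 50GB of journal space spread over the two NVMes, and roughly 6.7TB usable out of 20TB raw.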
As for the HBA: don't use a RAID controller for Ceph. Full stop. Marking disks as single-drive RAID0 volumes is a kludge and will noticeably impact performance, because the RAID controller will bridge the block size from disk native to RAID-volume native. Even when the block sizes are the same, it still has a performance impact and can result in unpredictable OSD behavior. I'd suggest https://www.broadcom.com/products/storage/host-bus-adapters/sas-9300-8i for a SAS HBA; it's not only faster but is much better attuned to modern SSDs.