You don't exactly NEED RAID controllers for Ceph to function. There are really only two reasons to have one:
- You need a RAID controller if you want more disks in your system than the mainboard's onboard controller offers.
- You can use a RAID controller to benefit from its battery-backed read/write cache. Do note that regular hard drives already have their own memory cache, just on a much smaller scale than what controllers carry (a quick way to check both is shown below). These caches really only exist because non-SSD drives are extremely slow...
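If you want to see which caches are actually in play on your box, something like this works on most Linux systems. Device names and the controller index are placeholders, and the storcli line only applies if you have a Broadcom/LSI card:

```bash
# Check whether the drive's own (volatile) write cache is enabled
hdparm -W /dev/sda        # "write-caching = 1 (on)" means enabled

# SMART identity info for the same drive
smartctl -i /dev/sda

# On a Broadcom/LSI controller, show the card's cache and BBU status
# (controller index /c0 is an assumption; adjust for your system)
storcli /c0 show all | grep -i -E 'cache|bbu'
```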
Also, while we tend to call them RAID controllers, they really aren't. They are just additional disk controllers which happen to implement some RAID levels (which Ceph neither needs nor wants). With Ceph you don't use the RAID functionality in any way, shape or form - the single-disk RAID0 setup is really just a trick to present individual disks to Ceph while still making use of the controller's cache (which the JBOD mode typically doesn't allow for).
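For anyone who wants a concrete picture of that trick, this is roughly what it looks like on a Broadcom/LSI card with storcli plus Proxmox's pveceph tooling. The enclosure/slot numbers and device names are placeholders for your own hardware, so treat it as a sketch rather than something to copy-paste:

```bash
# One RAID0 virtual drive per physical disk: the OS sees individual block
# devices, but writes still pass through the controller's BBU-backed cache.
# (Enclosure 252, slots 0-3 are assumptions; check "storcli /c0 show" first.)
storcli /c0 add vd type=raid0 drives=252:0
storcli /c0 add vd type=raid0 drives=252:1
storcli /c0 add vd type=raid0 drives=252:2
storcli /c0 add vd type=raid0 drives=252:3

# Then hand each resulting device to Ceph as its own OSD
# (on Proxmox; plain Ceph would use "ceph-volume lvm create --data /dev/sdX")
pveceph osd create /dev/sdb
pveceph osd create /dev/sdc
pveceph osd create /dev/sdd
pveceph osd create /dev/sde
```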
Very nicely put MO! Ceph indeed diminishes the need for a RAID setup. I myself use a combination of a RAID card and an expander card to get JBOD for Ceph. I am not sure the battery-backed cache is needed, since Ceph can heal itself pretty well. That was my primary concern when I was introduced to Ceph. I ran several tests to simulate a complete Ceph node failure but never had any data issues.
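For anyone curious, one simple way to run such a failure test (assuming systemd-managed OSDs and a pool whose size/min_size leaves enough surviving replicas) looks roughly like this:

```bash
# On the node you are "failing": stop all OSD daemons at once
systemctl stop ceph-osd.target

# From another node, watch the cluster go degraded and start rebalancing
ceph -s
ceph osd tree          # the stopped OSDs should show up as "down"
ceph health detail

# Bring the "failed" node back and watch recovery complete
systemctl start ceph-osd.target
watch ceph -s          # wait for HEALTH_OK and all PGs active+clean
```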
If we are talking about caching through Proxmox for VMs, such as writeback, writethrough, etc., that is of course a slightly different story. Those cache modes really have nothing to do with the RAID controller cache we are talking about here.
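For completeness, those VM cache modes are set per virtual disk in Proxmox. The VM ID, disk slot and volume name below are just placeholders:

```bash
# Set the cache mode for a VM's disk (writeback shown here; other options
# include none, writethrough, directsync and unsafe)
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback

# Verify the change in the VM's config
qm config 100 | grep scsi0
```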