> A question about write-back cache not being implemented in proxmox yet: effectively what does this mean? I am effectively using "ceph-volume create" from the cli right now. I would like to know if you are only referring to UI support?

ceph-volume create is the command to create OSDs. It has nothing to do with how you use the filesystem.
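For reference, a minimal sketch of OSD creation from the CLI on a current Proxmox/Ceph setup (the device name is a placeholder; adjust to your hardware). The Proxmox wrapper and the raw Ceph tool end up in the same place:

    # Proxmox wrapper, roughly what the GUI triggers as well
    pveceph osd create /dev/sdb

    # Raw Ceph tooling, equivalent result
    ceph-volume lvm create --data /dev/sdb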
What does "power hungry" mean in this context? do you have metrics? (block size, latency, potential iops per core, ?)am running an extremely power hungry application (Solana validator
> which at times has asked for configurations like "leave the accounts folder (now 500Gb) on a RAM drive".

So... why not do that?
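For what it's worth, a RAM-backed mount is a one-liner with tmpfs (size and mount point are placeholders; the contents are lost on reboot, so the application has to be able to rebuild them):

    # tmpfs is backed by RAM/swap; contents do not survive a reboot
    mount -t tmpfs -o size=600G tmpfs /mnt/accounts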
> So the exact specs of what is necessary is hard to come by

Why? You're already running it; all you need to do is look at the disk I/O statistics.
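A sketch of how to capture those statistics on the host while the validator is under load (iostat is in the sysstat package; column names vary a little between versions):

    # extended per-device statistics, refreshed every second; watch r/s, w/s,
    # rareq-sz/wareq-sz (request size) and r_await/w_await (latency)
    iostat -x 1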
> A question about write-back cache not being implemented in proxmox yet: effectively what does this mean?

Nothing. You want to measure what your application will do when using your disk subsystem and tune it to the best it will do. If that is insufficient, you will need to rethink the approach. Bear in mind there is caching at various levels of the subsystem; for example, the default RAM buffer per OSD is 4 GB, and you can tune that upwards. You can also create separate pools by device class and do your own storage management in the guest. That would effectively achieve the same thing as the cache layer in vSAN, just with manual promotion/demotion.
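A sketch of both knobs, assuming a current Ceph release where the per-OSD RAM buffer is osd_memory_target (default 4 GiB); the pool name, PG counts and device class are placeholders:

    # raise the per-OSD memory target from the 4 GiB default to 8 GiB
    ceph config set osd osd_memory_target 8589934592

    # replicated CRUSH rule restricted to one device class (e.g. nvme)
    ceph osd crush rule create-replicated nvme-only default host nvme

    # pool that only lands on that class; present it to the guest as a separate disk
    ceph osd pool create fastpool 64 64 replicated nvme-only

Inside the guest you then decide which data lives on the fast pool and which on the slow one, which is the manual promotion/demotion mentioned above.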
> I am effectively using "ceph-volume create" from the cli right now. I would like to know if you are only referring to UI support?

The UI calls the same tools as the CLI. It makes no difference, except that you can pass more arguments via the CLI if necessary.
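As an example of arguments that are only reachable from the CLI, depending on version (device paths and the class name are hypothetical):

    # separate DB device and an explicit device class in one step
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1 --crush-device-class nvme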