A question about write-back cache not being implemented in Proxmox yet: what does this mean in practice? I am currently using "ceph-volume create" from the CLI. Are you only referring to missing UI support?
Nothing. You want to measure what your application will do when using your disk subsystem, and tune it to the best it can do. If that is insufficient, you will need to rethink the approach. Bear in mind there is caching at various levels of the subsystem; for example, the default RAM buffer per OSD is 4G, and you can tune that upwards. You can also create separate pools by device class and do your own storage management in the guest. That would effectively achieve the same thing as the cache layer in vSAN, just with manual promotion/demotion.
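For the record, the two knobs mentioned above look roughly like this on the Ceph CLI. The per-OSD RAM buffer is `osd_memory_target` (default 4 GiB); the pool and rule names (`ssd-only`, `fastpool`) and the 8 GiB value are just examples, and PG counts depend on your cluster:

```shell
# Raise the per-OSD memory target from the 4 GiB default to 8 GiB (bytes).
ceph config set osd osd_memory_target 8589934592

# Create a CRUSH rule restricted to one device class, then a pool using it,
# so you can hand a "fast" disk and a "slow" disk to the guest and manage
# placement yourself.
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool create fastpool 128 128 replicated ssd-only
```

Both pools then get mapped into the VM as separate disks, and the promotion/demotion between them is up to you.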
Hi,
That still does not answer the question. @spirit said "it is not implemented in proxmox yet", and that was a reference to write-back cache. I am asking whether it is only a UI feature that is missing.
It turns out it is, more or less, missing at a core level. The write-back cache spirit is referring to is implemented in the kernel, and the Proxmox 8.4.1 kernel (6.8.12-10-pve) currently does NOT have that client-side caching support built in. It could dramatically speed things up; clearly it was added to the rbd module for a reason. So "Nothing" is not correct, and I sense this is what spirit was referring to. Maybe I can achieve the same result with librbd instead of rbd.ko? Still, it is always better to use the kernel client.
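On the librbd route: when a Proxmox VM disk uses librbd (i.e. not the kernel `rbd.ko` mapping), librbd has its own in-process write-back cache that can be enabled in `ceph.conf` on the client side. A minimal sketch; the sizes here are example values, not recommendations:

```ini
[client]
# librbd in-process cache, used by QEMU via librbd
# (Proxmox enables write-back behaviour when the disk is set to cache=writeback)
rbd cache = true
rbd cache policy = writeback
rbd cache size = 134217728       ; 128 MiB cache per image, example value
rbd cache max dirty = 100663296  ; max dirty bytes before forced writeback
```

This is not the same thing as the kernel client cache, but it may get you similar behaviour while the kernel support is absent.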
I am heading towards creating a high-performance pool with little to no replication and a high BlueStore cache (and one day write-back cache), so thanks for that; I believe it is a good suggestion. But I don't think doing storage management in the guest will allow the machine to migrate when I need to do hypervisor maintenance.
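A low-redundancy pool like that can be sketched as below. The pool name `fast` and the PG count are examples; note that `size 2` can lose data on a double failure and `min_size 1` will keep accepting writes with a single surviving copy, so this trades safety for speed. In recent Ceph, the BlueStore cache is sized automatically from `osd_memory_target`, so raising that is the "high bluestore cache" knob:

```shell
# Low-redundancy pool: two copies, and keep serving I/O with only one left.
ceph osd pool create fast 128 128
ceph osd pool set fast size 2
ceph osd pool set fast min_size 1
```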
As for Solana, I am merely speaking to the official requirements, which are loosey-goosey and more or less constantly changing. I do need to figure out how to gather metrics on current vSAN performance, this is true; I just have not had to do that before. For me that is a debugging step that has not been necessary. This thread is particularly interesting from a vSAN migration perspective, so I am trying to get as close to a similar setup as I can. vSAN does write-back caching and heavy buffering; my sense is that Proxmox holds data in cache until it is assured all of the replication is completed, whereas vSAN may "hand that off" more quickly. Whatever it does, it works, and write-back cache is a bit difficult to get working on Proxmox right now.
It's literally a dialog box. I think you mean hot-tier cache, which is true but pointless. The storage subsystem either works for your use case or it doesn't; getting caught up in how the filesystem works isn't likely to produce anything useful for you.