I have a lab with a recent CPU in the Proxmox (v9) host, backed by Ceph storage. Proxmox boots from NVMe. Ceph (19.2.3) runs on separate hardware over 10 Gbit links, and for the lab we're just using consumer-grade SATA SSDs. All in all it works very well. I am now benchmarking.
I installed a Windows VM and ran CrystalDiskMark as a first, simple benchmark to see where I end up. With direct or writethrough caching the results aren't great: sequential transfers approach wirespeed at best, and random read/write is fairly poor. When I enable writeback, however, performance is through the roof: 20 GB/s (bytes, not bits!) sequential read and write, and around 300 MB/s single-threaded random read/write.
I assume this is the result of caching in memory at the host level, but my understanding is limited so far. I have read that write barriers in the guest OS (e.g. for the ext4 journal, or NTFS, I'm guessing) help avoid file system corruption, but that data loss is still possible.
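For what it's worth, here's how I picture the effect, as a minimal sketch (plain Python, nothing Proxmox- or QEMU-specific; the /tmp path and the 1 GiB size are arbitrary choices): buffered write() calls return as soon as the data is in the host's page cache, and only fsync() waits for the storage underneath.

```python
import os, time

# Write 1 GiB to a scratch file and time the buffered writes separately
# from the explicit flush. The path is an arbitrary assumption; use any
# writable location.
path = "/tmp/cache_demo.bin"
chunk = b"\0" * (4 * 1024 * 1024)      # 4 MiB per write

t0 = time.monotonic()
with open(path, "wb") as f:
    for _ in range(256):               # 256 * 4 MiB = 1 GiB total
        f.write(chunk)                 # completes against the page cache
    t1 = time.monotonic()
    f.flush()                          # drain Python's userspace buffer
    os.fsync(f.fileno())               # block until storage acknowledges
t2 = time.monotonic()

print(f"buffered writes: {t1 - t0:.2f}s, fsync: {t2 - t1:.2f}s")
os.remove(path)
```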
20 GB/s is 16× faster than the 10 Gbit link, which tops out around 1.25 GB/s. So I'm speculating about how much data could theoretically be lost if a large sequential write is cut short by an abrupt VM or hypervisor failure. Given my basic test with CrystalDiskMark, I'd assume quite a lot. I'm also wondering how it's possible to sustain writes at many times wirespeed without hitting some kind of cache limit.
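To put rough numbers on that, a back-of-the-envelope sketch (the 128 GiB host RAM figure is purely an assumption for illustration; vm.dirty_ratio defaults to 20 on most Linux distributions, and the kernel throttles writers once dirty pages exceed roughly that fraction of memory):

```python
# Sanity-check the 16x figure and bound the data at risk in the page cache.
link_bytes_per_s = 10e9 / 8                 # 10 Gbit/s ≈ 1.25 GB/s
benchmark_bytes_per_s = 20e9                # reported ~20 GB/s sequential
print(benchmark_bytes_per_s / link_bytes_per_s)   # => 16.0 (times wirespeed)

host_ram = 128 * 2**30                      # assumed host RAM: 128 GiB
dirty_ratio = 0.20                          # Linux default vm.dirty_ratio = 20
max_dirty = host_ram * dirty_ratio
print(f"worst-case unflushed data: {max_dirty / 2**30:.1f} GiB")  # ~25.6 GiB
```

If that bound is roughly right, there is a cache limit; it's just that CrystalDiskMark's default 1 GiB test file fits comfortably inside it, so the benchmark never runs into writeback throttling.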
Much appreciated!