This problem has been rumbling on for a while: lots of trace messages about delays, Ceph being "flaky", very slow VMs, Kubernetes clusters taking 30 minutes to come up, and so on.
We migrated one VM with Kubernetes onto a small amount of NVMe local storage, and at the moment the view is that it has come alive.
We finally ran some tests with fio. The NVMe that Proxmox sits on is fast, very fast, around 200 MB/s, but anything on the HBA SAS drives is running very slowly, even by mechanical-disk standards: 530 kB/s to 1,000 kB/s on 4k random I/O against an 8 GB file, although they do pull 200 MB/s on sequential reads.
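For reference, a 4k random job along these lines is what produced the numbers above (the file path and job name here are illustrative, not the exact command we ran):

fio --name=randtest --filename=/mnt/sas/testfile --size=8G --rw=randread --bs=4k --ioengine=libaio --direct=1 --iodepth=1 --runtime=60 --time_based --group_reporting

The --direct=1 flag bypasses the page cache so the drives, not RAM, are being measured.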
My general question: I fully appreciate that spinning drives are slow at random reads, but should they be this slow (<1,000 kB/s), and what is the likely culprit? The P440, the drives being mechanical, something else?
The P440 is on the latest firmware (7 Sep 2022); in fact the entire machine is as up to date as it can be.
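For anyone wanting to double-check, the controller model, firmware version and cache settings can be dumped with HPE's ssacli tool (assuming it is installed on the host):

ssacli ctrl all show config detail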