I found many similar posts here, but I still can't find an answer to my question.
Our case: TrueNAS is running as a VM inside Proxmox. We have a Dell 640 with an HBA and Dell enterprise SSDs. On TrueNAS, a VM pool is configured and shared as NFS storage to Proxmox. Everything works, but in my opinion performance is slow. I have two nodes; on the second one only 2 VMs are running (TrueNAS, Windows Server 2019), so I am testing there on the Windows Server 2019 VM.
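For reference, the share is mounted in Proxmox as an NFS storage entry; a minimal sketch of the /etc/pve/storage.cfg entry (the storage ID, server IP, and export path here are placeholders, not our actual values):

    nfs: truenas-vmpool
            server 192.168.1.10
            export /mnt/vmpool/proxmox
            path /mnt/pve/truenas-vmpool
            content images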
When I set the "no cache" option, read performance is about 1900 MB/s, but write is about 206 MB/s; at 4K we get 172 MB/s vs 9.77 MB/s. When I switch to "write back" instead of "no cache", read performance also changes drastically: read goes to 6200 MB/s and write to 5600 MB/s, and at 4K to 312 MB/s vs 184 MB/s.
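For context, the cache mode is a per-disk option on the VM; this is roughly how it is switched between the two tests (the VM ID and storage/disk names are placeholders, not our actual values):

    qm set 100 --scsi0 truenas-vmpool:vm-100-disk-0,cache=none
    qm set 100 --scsi0 truenas-vmpool:vm-100-disk-0,cache=writeback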
My first question: why is read performance also better with the write cache enabled?
Second: can we increase write speed without the write-back cache? It consumes RAM, since disk writes are buffered in host memory.
I migrated from a VMware HA cluster (old HP servers with 2.5" HDDs), and without the write cache the disk I/O looks much worse to me. I feel like we have a bottleneck somewhere in the setup.
On the left side is "no cache", on the right "write back".
When we tested with iperf between Proxmox and TrueNAS over the link the NFS share uses, it ran at about 37-39 Gb/s. I also tested QCOW2 vs RAW, but in the same test the speeds were slightly worse with RAW.
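The network test was a plain iperf3 run between the Proxmox host and the TrueNAS VM, along these lines (the IP and stream count are illustrative, not our exact invocation):

    iperf3 -c 192.168.1.10 -P 4 -t 30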
I don't need super speeds on the VMs, but our production runs on MS SQL with badly written interfaces and jobs (I can't do anything about those), and it slows down; disk utilisation is mostly at 100% (blue color in the graph).
Thanks for your ideas.