Hello,
I haven't found this exact question asked yet, although I suspect the answer overlaps with other "ZFS performance" threads. I am struggling with low ZFS performance on SSDs (Samsung Evo 850/860). I started looking at the RAID card settings and the KVM disk settings, and my tests suggest that I have most likely been carrying the same mistake from server to server. The first thing I looked at was the hardware RAID card, where we have the write cache setting. I tested both Write Through and Write Back (which I had been using by default) and did not notice a real difference, although the benchmarks varied by a few MB/s either way. Then I repeated the test while changing the KVM disk cache setting, with the following results (a sketch of the commands follows the numbers):
Dell R620, Perc H710p Mini
6x 500GB SSD RAID 10
VPS cache = Write Through / Write Back:
  write: 1050 MiB/s
  read: 700 MiB/s
VPS cache = default (no cache):
  write: 1300 MiB/s
  read: 860 MiB/s
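To make the comparison reproducible, here is a minimal sketch of how the cache mode is switched on the VM disk and how a simple sequential test can be run; the VM ID 100, the storage name local-zfs and the fio parameters are placeholders, not my exact setup:

  # Proxmox host: switch the cache mode of an existing VM disk (hypothetical VM 100)
  qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=none          # default, no host page cache
  qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writethrough
  qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback

  # inside the VPS: simple sequential write/read test with fio
  fio --name=seqwrite --rw=write --bs=1M --size=4G --direct=1 --numjobs=1
  fio --name=seqread --rw=read --bs=1M --size=4G --direct=1 --numjobs=1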
Doesn't enabling Write Back or Write Through on both the RAID controller and the KVM disk cause exactly this kind of double-write/double-caching effect? Why are the speeds so different? I will add that the default setting (no cache) gave results much closer to what I would like to see on a loaded production machine than either Write Through or Write Back. On top of that comes the doublewrite done by MySQL itself, which effectively turns every write into a triple write and causes the performance degradation that customers complain about.

I also noticed that on the test machine (Proxmox with a 5.3.x kernel and ZFS 0.8.2) the load and the RAM usage are much more predictable than on my previous machines. On the older machines, higher disk load meant higher system load and general sluggishness, and the load average regularly exceeded the number of installed cores/threads, at which point, as we know, performance can drop sharply.
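To check where the writes actually end up, I assume the guest-visible rate can simply be compared with what the pool sees on the host while the benchmark runs; rpool is just the default Proxmox pool name here:

  # Proxmox host: per-vdev throughput while the VPS benchmark is running
  zpool iostat -v rpool 1

  # ARC / RAM usage on the host, to compare against the guest numbers
  arcstat 1
  # or read the raw counters: cat /proc/spl/kstat/zfs/arcstats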
In addition, I ran a test with a separate "Logs" (SLOG) disk and an "L2ARC" disk. After adding the "Logs" disk, writing 128 files of 16 MB each gave the following results (a sketch of how the devices are attached follows below):
No disk "Logs": 28sec creation time, write speed 100-115MB/s
With disk "Logs": 8sec creation time, write speed 250-265MB/s
I have also read that ZFS works best when it has direct access to each disk individually, so unfortunately the tests above were run with every drive exposed as its own single-disk RAID 0 volume on the controller: 6 of them for the ZFS RAID 10 pool, or 7 with the extra "Logs" disk. I am asking for help/hints on whether I am going in the right direction, and whether there is something else I should change so that ZFS stops being the bottleneck it currently is.
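If I do manage to expose the disks individually (HBA/IT mode or per-disk passthrough instead of single-disk RAID 0 volumes), my understanding is that the ZFS equivalent of RAID 10 is built from striped mirrors, roughly like this; the pool name and device IDs are placeholders:

  # ZFS "RAID 10": three striped mirror vdevs from six individual SSDs
  zpool create -o ashift=12 tank \
    mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B \
    mirror /dev/disk/by-id/ata-SSD_C /dev/disk/by-id/ata-SSD_D \
    mirror /dev/disk/by-id/ata-SSD_E /dev/disk/by-id/ata-SSD_F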
I will add that my previous machines run Debian 9 with a 4.x kernel and ZFS 0.7.x (I have read that that version had performance problems).