Hey,
I set up a "Storage" ZFS pool on my server with two 8 TB WD Red Pros as a mirror. I gave parts of this pool to several VMs and created an ext4 filesystem inside those to make use of it. Performance was terrible, worse than a single one of these drives without any RAID: I got about 120-150 MB/s writes on average, but with extremely high IO delay, up to over 90% on the server. (The server has 64 GB of RAM installed, with roughly 40-45 GB of that in use.)
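For context, the pool and the VM disks were created roughly like this; the device paths, the zvol name and the size are just placeholders, not my exact ones:

    zpool create Storage mirror /dev/disk/by-id/ata-WDC_RED_PRO_1 /dev/disk/by-id/ata-WDC_RED_PRO_2
    # one zvol per VM, with ext4 created on top inside the guest
    zfs create -V 500G Storage/vm-disk-1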
Then I read up on ZFS a bit, installed an NVMe SSD (a Samsung 970 Evo, I know it's not made for this) and gave the HDD mirror 60 GB of SLOG and 400 GB of cache (L2ARC).
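The SLOG and cache were added as two partitions on the 970 Evo, roughly like this (partition paths again just placeholders):

    zpool add Storage log /dev/nvme0n1p1      # ~60 GB partition for the SLOG
    zpool add Storage cache /dev/nvme0n1p2    # ~400 GB partition for L2ARC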
What's confusing me now is that nothing really changed. Neither read nor write speed improved after adding the SLOG and cache.
I understand that not every write is a sync write, so the SLOG doesn't necessarily help in every situation, but shouldn't at least the cache help tremendously with write performance?!
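If it helps, I believe something like the following should show whether sync writes and the cache are actually being hit; I can post the output if needed:

    zfs get sync Storage          # sync behaviour of the pool/datasets
    zpool iostat -v Storage 5     # per-vdev load, including log and cache devices
    arc_summary | head -n 40      # ARC/L2ARC hit rates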
Can someone help me understand this issue better? Did I misconfigure something?
Thanks a lot in advance!