Installation with ZFS mirror on SSD super slow

Amari

Member
May 12, 2020
Hi,
I just did a new installation of Proxmox and selected a ZFS mirror with 2 SSDs during the installation, keeping the default values with ashift 12.
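
For what it's worth, I think the ashift the installer actually applied can be double-checked without reinstalling (the pool name rpool is taken from my benchmark output below):

Code:
# pool property set at creation time by the installer
zpool get ashift rpool

# per-vdev ashift as stored in the pool configuration
zdb -C rpool | grep ashift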

Unfortunately, the 4k read/write performance is incredibly slow compared to mdadm RAID 1.

On the mdadm RAID 1 I had around 100k IOPS at 4k block size. Now with ZFS it is only around 14k and I can't figure out why. I always use the default settings from Proxmox.
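
In case the defaults matter here, this should show how the dataset I benchmarked (rpool/ROOT/pve-1, see below) is actually configured; I have not changed any of these properties myself:

Code:
zfs get recordsize,compression,atime,sync,primarycache rpool/ROOT/pve-1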

Disks are:
SAMSUNG MZQLB960HAJR-00007
SAMSUNG MZQLW960HMJP-00003

This is on mdadm:
Code:
fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/md2):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 398.70 MB/s  (99.6k) | 466.46 MB/s   (7.2k)
Write      | 399.75 MB/s  (99.9k) | 468.92 MB/s   (7.3k)
Total      | 798.45 MB/s (199.6k) | 935.39 MB/s  (14.6k)
           |                      |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 449.28 MB/s    (877) | 419.24 MB/s    (409)
Write      | 473.15 MB/s    (924) | 447.16 MB/s    (436)
Total      | 922.43 MB/s   (1.8k) | 866.41 MB/s    (845)

This is with ZFS:
Code:
fio Disk Speed Tests (Mixed R/W 50/50) (Partition rpool/ROOT/pve-1):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 55.19 MB/s   (13.7k) | 1.46 GB/s    (22.9k)
Write      | 55.28 MB/s   (13.8k) | 1.47 GB/s    (23.0k)
Total      | 110.48 MB/s  (27.6k) | 2.93 GB/s    (45.9k)
           |                      |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 4.49 GB/s     (8.7k) | 2.99 GB/s     (2.9k)
Write      | 4.73 GB/s     (9.2k) | 3.19 GB/s     (3.1k)
Total      | 9.23 GB/s    (18.0k) | 6.18 GB/s     (6.0k)


I read that some SSDs need ashift 13 (8K physical sectors). Could that make such a big difference? If I didn't need to book a KVM console at the datacenter, I would just reinstall and try it out, which is why I am trying to understand the problem first.
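
Before reinstalling, I would also like to check which sector sizes the drives actually report. As far as I know, something along these lines should work for NVMe drives (the device names are just examples for my two disks):

Code:
# logical/physical block size the kernel sees
cat /sys/block/nvme0n1/queue/logical_block_size
cat /sys/block/nvme0n1/queue/physical_block_size

# LBA formats supported by the namespace (needs the nvme-cli package)
nvme id-ns /dev/nvme0n1 -H | grep -i 'lba format'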
Thanks, I can try with your parameters.
My results are from the yabs.sh benchmark; I am not sure which default parameters it uses.
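
If it helps for comparison, I can also rerun a more controlled 4k random read/write test directly with fio instead of yabs.sh, something along these lines (the parameters are only my guess at a reasonably comparable workload, not what yabs.sh uses):

Code:
# the target directory must exist on the dataset being tested
fio --name=randrw4k --directory=/root/fio-test --rw=randrw --rwmixread=50 \
    --bs=4k --size=2G --ioengine=libaio --iodepth=64 --numjobs=4 \
    --direct=1 --runtime=60 --time_based --group_reporting
# note: as far as I know ZFS does not really honour O_DIRECT, so the ARC can still skew the read numbers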