Yeah yeah I know, another ZFS perf issue...
Seriously though, I've read a few threads on this forum and elsewhere about ZFS performance issues, sometimes read, sometimes write, sometimes both, and in my case it's a read issue.
Below is a comparison between my HDD with ext4 and with ZFS. I only tested with dd, which I know isn't the most reliable benchmark, but I can run more tests with bonnie++ or fio (a possible fio job is sketched at the end of this post) or post iotop output if requested.
It's a really simple test with one HDD in single-disk mode.
CPU: Ryzen, 24 threads
RAM: 64 GB
Simple GPT partition created with gdisk + mkfs.ext4 with default parameters.
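From memory, the ext4 setup was roughly this (sgdisk shown here instead of the interactive gdisk session, device name illustrative):
Bash:
# one partition spanning the whole disk, default ext4, mounted for the test
sgdisk -n 1:0:0 /dev/sdg
mkfs.ext4 /dev/sdg1
mkdir -p /root/test-sdg
mount /dev/sdg1 /root/test-sdg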
Bash:
root@pve:/home/okeur# dd if=/dev/zero of=/root/test-sdg/test bs=1G count=64
64+0 records in
64+0 records out
68719476736 bytes (69 GB, 64 GiB) copied, 270.184 s, 254 MB/s
root@pve:/home/okeur# dd if=/root/test-sdg/test of=/dev/null
134217728+0 records in
134217728+0 records out
68719476736 bytes (69 GB, 64 GiB) copied, 261.992 s, 262 MB/s
root@pve:/home/okeur#
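One caveat I noticed while pasting this: the read command has no bs=, so dd falls back to its 512-byte default (hence the 134217728 records in). A fairer read test would presumably use an explicit block size:
Bash:
# hypothetical re-run of the read with 1M blocks instead of dd's 512-byte default
dd if=/root/test-sdg/test of=/dev/null bs=1M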
Single-disk pool created through the WebUI, ashift=12 and compression on.
Turning compression off does not change anything.
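My understanding is that the WebUI pool creation boils down to something like the following (device path illustrative, not the exact commands it ran):
Bash:
# single-disk pool with 4K sectors (ashift=12) and compression enabled
zpool create -o ashift=12 ZFS-SDG /dev/disk/by-id/ata-EXAMPLE
zfs set compression=on ZFS-SDG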
For all the read tests, the CPU hits 100% on one core only.
Bash:
root@pve:/home/okeur# dd if=/dev/zero of=/ZFS-SDG/test bs=1G count=64
64+0 records in
64+0 records out
68719476736 bytes (69 GB, 64 GiB) copied, 35.1505 s, 2.0 GB/s
root@pve:/home/okeur# dd if=/ZFS-SDG/test of=/dev/null
^C1993255+0 records in
1993254+0 records out
1020546048 bytes (1.0 GB, 973 MiB) copied, 48.1261 s, 21.2 MB/s
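If it helps, I can redo this read with an explicit block size and a cold cache. As far as I know, echo 3 > /proc/sys/vm/drop_caches does not empty the ARC, so I'd export and re-import the pool between runs, something like:
Bash:
# hypothetical cold-cache re-read; export/import evicts this pool's data from the ARC
zpool export ZFS-SDG
zpool import ZFS-SDG
dd if=/ZFS-SDG/test of=/dev/null bs=1M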
For testing purposes, here is a raidz2 with 6 HDDs and a recordsize of 1M.
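This one was also created through the WebUI; my guess at the CLI equivalent (disk names illustrative):
Bash:
# 6-disk raidz2, same ashift, 1M records
zpool create -o ashift=12 VM-Storage raidz2 sda sdb sdc sdd sde sdf
zfs set recordsize=1M VM-Storage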
Bash:
root@pve:/home/okeur# dd if=/dev/zero of=/VM-Storage/test bs=1G count=64
64+0 records in
64+0 records out
68719476736 bytes (69 GB, 64 GiB) copied, 32.487 s, 2.1 GB/s
root@pve:/home/okeur# dd if=/VM-Storage/test of=/dev/null
^C1677102+0 records in
1677101+0 records out
858675712 bytes (859 MB, 819 MiB) copied, 118.629 s, 7.2 MB/s
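For the record, this is how the dataset settings can be double-checked between runs:
Bash:
# confirm what the dataset is actually using
zfs get recordsize,compression VM-Storage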
Changing the recordsize to 128K helped a lot, but it's still slow...
Bash:
root@pve:/home/okeur# zfs set recordsize=128K VM-Storage
root@pve:/home/okeur# dd if=/dev/zero of=/VM-Storage/test2 bs=1G count=64
64+0 records in
64+0 records out
68719476736 bytes (69 GB, 64 GiB) copied, 35.5218 s, 1.9 GB/s
root@pve:/home/okeur# dd if=/VM-Storage/test2 of=/dev/null
^C1794359+0 records in
1794358+0 records out
918711296 bytes (919 MB, 876 MiB) copied, 43.0306 s, 21.4 MB/s
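And as mentioned at the top, here's the kind of fio job I could run instead of dd if that's more useful (parameters are just a first guess on my side):
Bash:
# sequential read, 1M blocks; job parameters are my guesses, not a vetted benchmark
fio --name=seqread --directory=/VM-Storage --rw=read --bs=1M --size=8G --ioengine=psync --numjobs=1 --group_reporting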
Thanks guys!