Hello Everyone,
I would be interested to understand whether anyone else has seen similar results, and what they've traced the issue to.
For evaluation purposes we have been testing some NVMe drives in a Proxmox cluster in both local-drive and ZFS configurations (single disk, Micron 9300 MAX 3.2TB, rated at 3500 MB/s read / 3100 MB/s write and 835k / 310k IOPS read/write). For ZFS we have tried both with and without compression and encryption.
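For reference, the ZFS datasets were set up along these lines. This is a minimal sketch only: the pool/dataset names, device path, ashift value and aes-256-gcm cipher are assumptions for illustration, not our exact commands.

# hypothetical single-disk pool on the Micron 9300 MAX (device name assumed)
zpool create -o ashift=12 tank /dev/nvme0n1
# encrypted, uncompressed dataset
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase -o compression=off tank/enc-nocomp
# encrypted, compressed dataset
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase -o compression=lz4 tank/enc-lz4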
At a high level, here are the results as executed from a shell on the host (2x AMD EPYC 7542 32-core processors, 1 TB RAM):
fio --filename=<STORAGE_PATH>/10G --size=10GB --direct=1 --rw=randrw --bs=8k --ioengine=libaio --iodepth=8 --runtime=60 --numjobs=4 --time_based --group_reporting --name=<STORAGE>_10G_8k_8_4X --eta-newline=1 --output-format=normal --output <STORAGE_PATH>_10G_8k_8_4X.txt
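For example, against the encrypted/uncompressed dataset the invocation looks roughly like this (the /tank/enc-nocomp mountpoint and the job name are hypothetical placeholders):

fio --filename=/tank/enc-nocomp/10G --size=10GB --direct=1 --rw=randrw --bs=8k \
    --ioengine=libaio --iodepth=8 --runtime=60 --numjobs=4 --time_based \
    --group_reporting --name=zfs_enc_nocomp_10G_8k_8_4X --eta-newline=1 \
    --output-format=normal --output zfs_enc_nocomp_10G_8k_8_4X.txt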
Results within the same storage destination vary 5%-10% from run to run, but here is a representative middle-of-the-pack result for each:
ZFS encrypted, uncompressed: Jobs: 4 (f=4): [m(4)][42.6%][r=145MiB/s,w=146MiB/s][r=18.6k,w=18.7k IOPS]
ZFS encrypted, compressed: Jobs: 4 (f=4): [m(4)][55.0%][r=145MiB/s,w=146MiB/s][r=18.5k,w=18.7k IOPS]
Data Disk w/ EXT4: Jobs: 4 (f=4): [m(4)][59.0%][r=822MiB/s,w=824MiB/s][r=105k,w=105k IOPS]
For relative comparison, another NVMe (Intel DC P4511 2TB, rated at 2100 MB/s read / 1430 MB/s write and 295k / 36k IOPS read/write):
Boot Disk EXT4: Jobs: 4 (f=4): [m(4)][48.3%][r=517MiB/s,w=518MiB/s][r=66.2k,w=66.3k IOPS]
The ZFS performance seems to be absolutely crippled, at least as seen by fio with the parameters we chose to test with. We did try a few different iodepths, numjobs, and block sizes (roughly the kind of sweep sketched below), but have not done exhaustive testing yet. Before investing more time in this I wanted to see whether others had similar experiences.
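A sketch of how such a sweep can be scripted; the block sizes, queue depths, and job counts listed here are illustrative, not our exact test matrix, and <STORAGE_PATH> stays a placeholder as above:

# illustrative parameter sweep; values are examples, not the exact ones we ran
for bs in 4k 8k 16k 128k; do
  for iodepth in 1 8 32; do
    for numjobs in 1 4 8; do
      fio --filename=<STORAGE_PATH>/10G --size=10GB --direct=1 --rw=randrw \
          --bs=$bs --ioengine=libaio --iodepth=$iodepth --numjobs=$numjobs \
          --runtime=60 --time_based --group_reporting \
          --name=sweep_${bs}_qd${iodepth}_j${numjobs} \
          --output sweep_${bs}_qd${iodepth}_j${numjobs}.txt
    done
  done
done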
Thanks in advance.