Hi everyone,
here are some tests - maybe someone can explain how to make ZFS faster.
test server configuration:
- CPU: 32 x Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz (2 Sockets)
- RAM: 128GB
- 2x1TB enterprise SSD: rootfs + cache + ZIL partitions (sda3 and sdb2) for ZFS
- 2x2TB hybrid SHDD
- P710H mini embedded controller (2GB cache)
Configuration: Debian 9 was installed via netinstall and PM5 on top of it - this lets me partition the drives the way I want.
The hybrid drives were built as HW RAID 1 because the embedded controller has no JBOD mode; two single-drive RAID 0 volumes mirrored via md or a ZFS mirror were much slower, so those results are not shown (a sketch of those alternatives follows below).
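For completeness, the rejected alternatives looked roughly like this (a sketch only - the device and pool names are my assumptions; each disk would be exposed by the controller as a single-drive RAID 0 volume):
# ZFS mirror across the two single-drive RAID 0 volumes
zpool create tank mirror /dev/sdc /dev/sdd
# or an md RAID 1, with the filesystem created on /dev/md0 afterwards
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd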
HDD configurations tested:
1. ZFS created directly on /dev/sdc (whole disk; gparted shows it as a "Solaris /usr & Apple ZFS" partition)
2. ZFS created on /dev/sdc1 (primary partition created with gparted)
3. ext4
4. xfs
Both ZFS setups had their cache and log parts on separate SSDs; sync=disabled, compression left at the default (it only loads the CPU, it doesn't affect IO), dedup=off. A rough sketch of the pool setup follows below.
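For reference, the manual pool creation looked roughly like this (a sketch only - the pool name, the ashift value and which SSD partition carried the log vs the cache are my assumptions, not the exact commands used):
# variant 1: whole disk (ZFS partitions it itself)
zpool create -o ashift=12 tank /dev/sdc
# variant 2: pre-created primary partition
zpool create -o ashift=12 tank /dev/sdc1
# SLOG (log) and L2ARC (cache) on the SSD partitions
zpool add tank log /dev/sda3
zpool add tank cache /dev/sdb2
# settings used for the tests (compression and dedup left at their defaults)
zfs set sync=disabled tank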
The fio tool was used for testing. Configuration file:
# cat vm-data.rand-read-write.ini
[readtest]
blocksize=4k
rw=randread
ioengine=libaio
iodepth=32
[writetest]
blocksize=4k
rw=randwrite
ioengine=libaio
iodepth=32
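Note that neither job sets direct=1, so fio uses buffered I/O and the page cache is in play - that's the "(no directio)" in the results below. A direct-I/O variant for the ext4/XFS runs could look like the job below (just a sketch, not part of the original test); on ZFS it wouldn't help anyway, since ZFS on Linux before 0.8 doesn't support O_DIRECT:
; hypothetical direct-I/O job, not used in the tests above
[readtest-direct]
blocksize=4k
rw=randread
ioengine=libaio
iodepth=32
; bypass the page cache (works on ext4/XFS; ZoL before 0.8 returns EINVAL)
direct=1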
it was run as:
fio vm-data.rand-read-write.ini --size=3G --filename /<mount point>/test
The 3GB file size was chosen after some experimenting: ZFS showed huge performance degradation once the test file went above 2GB.
To be completely sure nothing else affected the results, the server was rebooted after each run. Each test had 3 iterations.
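A lighter-weight alternative to a full reboot (not used here) would be dropping the page cache between runs, e.g.:
sync; echo 3 > /proc/sys/vm/drop_caches
but that doesn't reliably empty the ZFS ARC, so rebooting is the safer way to compare.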
So results:
HW + ZFS(sdc, whole disk) + cache + log + sync (no directio)
        Read     Write
Run 1:  331164   11433
Run 2:  447599   8350
Run 3:  316630   10258

HW + ZFS(sdc1, partition) + cache + log + sync (no directio)
        Read     Write
Run 1:  273232   13052
Run 2:  356658   14071
Run 3:  596572   14037

EXT4 (mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/sdc1)
        Read     Write
Run 1:  438245   720835
Run 2:  458293   640547
Run 3:  397840   554997

XFS
        Read     Write
Run 1:  348      349
Run 2:  -        -
Run 3:  -        -
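Quick averages of the runs above, for reference: ZFS on the whole disk ~365k read / ~10.0k write, ZFS on the partition ~409k read / ~13.7k write (so the whole-disk pool is roughly 11% slower on reads and 27% slower on writes), ext4 ~431k read / ~639k write.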
Conclusions from results:
1. ZFS from the default Proxmox installation (whole-disk setup) is about 30% slower than ZFS made manually on a native Linux partition.
2. XFS is so slow that it can't be used in prod - that's the result I can't explain; the 3GB test file took several hours to run. I didn't include it on the chart.
3. ext4 is much faster on writes, which makes heavily loaded servers run much smoother than on ZFS, especially while backups are running.
4. If you still want ZFS - build it manually, not via the PM installer.
So I want to ask for help from everyone who uses ZFS as storage - what drive/RAID configurations do you use to make ZFS run without problems? Especially now that PM5 is live and ZFS is the proposed way to go...