zfs performance

a1979

Hi.
I have a server with hardware RAID. The RAID has 8 SSDs, and I made two logical disks. On the first disk I use ext4, where the OS is installed. On the second disk I tested both ext4 and ZFS.
With ext4 I get a throughput of 1.3 GB/s. With ZFS and compression=off I get 540 MB/s. With ZFS and compression=on I get 1.3 GB/s.

Is it normal that ext4 reaches 1.3 GB/s without compression, while ZFS only reaches 1.3 GB/s with compression=on?
Thank you.

Command for my test:
sync; dd if=/dev/zero of=/rpool/test5 bs=1M count=204800; sync
 
First, dd is not a good tool for benchmarks.

Second, using ZFS on top of a RAID controller is not good practice. ZFS wants the disks as raw as possible, but a RAID controller abstracts too much from the physical disks and has its own cache, which can interfere.

Third, ZFS can be configured in many different ways, with different levels of cache devices and special devices for storing metadata, which should all live on fast SSDs and which alleviate some of the slow performance of HDDs (see the example commands after this list). Without knowing how exactly you set it up, it is hard to judge.

Fourth: besides all the above points, yes, ZFS can have slightly worse performance in cases like this compared to simpler file systems like ext4 or XFS. What you get in return is a very high level of data consistency and advanced features.
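
To illustrate the third point, here is a rough sketch of how such cache and special devices could be attached to a pool. The device names are placeholders and not taken from this thread, so do not copy them blindly:
Code:
# Check whether compression is enabled on the pool
zfs get compression rpool

# Add a fast NVMe device as an L2ARC read cache (placeholder device name)
zpool add rpool cache /dev/nvme0n1

# Add a mirrored special vdev for metadata (placeholder device names)
zpool add rpool special mirror /dev/nvme1n1 /dev/nvme2n1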
 
server-II

zfs set compression=off rpool

proxmox1:~# sync; dd if=/dev/zero of=/rpool/backup/test1 bs=1M count=10240; sync
10737418240 bytes (11 GB, 10 GiB) copied, 114.36 s, 93.9 MB/s

zfs set compression=on rpool
sync; dd if=/dev/zero of=/rpool/backup/test2 bs=1M count=10240; sync

10737418240 bytes (11 GB, 10 GiB) copied, 4.63618 s, 2.3 GB/s

I have the default configuration after installing Proxmox 6. I don't use a cache disk or RAID.

93.9 MB/s vs. 2.3 GB/s. Crazy option! :)
 
dd'ing from /dev/zero with compression on shows how fast your CPU can deliver zeros and compress them, and how little data and metadata actually need to be written to disk ;)
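
If you want a slightly more honest dd number (still not a proper benchmark), write incompressible data instead of zeros, for example pre-generated random data. The paths below are just examples:
Code:
# Generate ~1 GiB of incompressible data in RAM first (example path)
dd if=/dev/urandom of=/dev/shm/random.bin bs=1M count=1024

# Write it to the pool; conv=fdatasync makes dd wait until the data hits disk
dd if=/dev/shm/random.bin of=/rpool/backup/test3 bs=1M conv=fdatasync

# Clean up
rm /dev/shm/random.bin /rpool/backup/test3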

If you want to do benchmarking, check out fio, which does the job well and gives you better insight into IOPS and bandwidth, for both reading and writing. It is best to run benchmarks with small block sizes like 512 B as well as larger block sizes like 4k and 8k.
Don't forget to do random read/write tests as well, since performance usually drops quite a bit with random patterns. They do resemble most real-life use cases better, though; see the example below.
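
A minimal example of such a test, assuming the pool is mounted under /rpool/backup (adjust the directory, read/write mix, and runtime to your setup; how direct=1 behaves on ZFS depends on the version):
Code:
fio --name=randrw4k --directory=/rpool/backup --rw=randrw --rwmixread=75 \
    --bs=4k --size=4g --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=60 --time_based --group_reporting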
 
As always, using this job file with fio gives you a comparable test:
Code:
# This job file tries to mimic the Intel IOMeter File Server Access Pattern
[global]
description=Emulation of Intel IOmeter File Server Access Pattern

[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
direct=1
size=4g
#size=1g
ioengine=libaio
# IOMeter defines the server loads as the following:
# iodepth=1    Linear
# iodepth=4    Very Light
# iodepth=8    Light
# iodepth=64    Moderate
# iodepth=256    Heavy
iodepth=64
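
To run it, save the job file (for example as iometer.fio; the name is just an assumption), change into a directory on the pool you want to test, and start fio with the file. Since the job sets no directory or filename, fio creates its test file in the current directory:
Code:
cd /rpool/backup
fio iometer.fio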
 
