ZFS performance issues

stijnos

New Member
Jan 23, 2022
Hello!

First of all, let me say that I'm aware I don't have the recommended hardware and setup as stated in the official documentation; I don't use enterprise-grade SSDs.
However, I'd still like to ask for your help in diagnosing and optimizing my storage settings. I hope I can squeeze out some more performance.

I have a Proxmox host with:
CPU: Intel Core i3-9100 (4 cores @ 3.6 GHz)
RAM: 32GB DDR4
Storage: - 2x Samsung 870 QVO 1TB SATA SSD
- 2x 4TB Western Digital hard drives

The issue: when copying files to a VM running on ZFS storage, I see the RAM usage on my Proxmox host fill up, and then file-transfer performance drops significantly. The transfer starts at 110MB/s, saturating my gigabit connection, then drops to 0 KB/s for a few seconds before climbing back to around 15MB/s. RAM usage before the file transfer was around 60-70%. The issue for me is the speed drops, of course; I don't care whether it fills up my RAM or not. That is just the behaviour I notice when doing a file transfer.

[Screenshot attachment: zfs performance.PNG]

When copying to the hard drives, which are simple directories on EXT4, I have no performance drops.

It is clear to me that ZFS takes a huge performance penalty on my hardware. But I hope it is due to my settings, rather than hardware limitations.

Configuration:
The two Samsung 870 QVO SSDs are configured as a zpool in mirror-1 with compression=on (otherwise default settings). I'm a total ZFS noob, so I have no idea where to start optimizing. I'm not expecting world-class performance, but to be honest I expected to be able to saturate my gigabit line when writing to SSD storage.
I have another proxmox host available, so if absolutely necessary I can delete the current ZFS pool and re-create it with different settings if that would give me more performance.

So, TL;DR: what can I do (on the software side) to optimize ZFS performance?
 
When copying some files to a VM running on ZFS storage, I see the RAM usage on my Proxmox host fill up and then file-transfer performance drops significantly. The transfer starts at 110MB/s, saturating my gigabit connection, and then drops to 0 KB/s for a few seconds to increase to around 15MB/s.
The RAM filling up is the ZFS ARC (Adaptive Replacement Cache, i.e. the read cache). By default it will grow to up to 50% of memory if available and needed, and it will free that memory again if other processes want it.
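If you want to see how large the ARC currently is, or cap it, you can inspect the kernel stats and set the `zfs_arc_max` module option. A sketch (the 8 GiB value is just an example, not a recommendation):

```shell
# Show current ARC size and its configured maximum (values in bytes)
awk '/^size |^c_max / {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at e.g. 8 GiB, then rebuild the initramfs and reboot
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u
```

Note that capping the ARC only limits memory usage; it won't fix the write-speed drops described below.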

Now, regarding your performance drops; unfortunately, those are related to the used SSDs. The Samsung QVOs are bad when it comes to sustained large writes. Once their cache is full, data needs to be written directly to the actual storage cells, which are QLC. Once that happens, performance is abysmal.

You can clearly see in the node summary how the IO delay (how long a process needs to wait for IO to finish) is exploding (dark graph in the top right).
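You can also watch that latency directly on the pool while a transfer is running; `zpool iostat -l` (available in recent OpenZFS versions) shows per-device wait times. "tank" below is a placeholder for your actual pool name:

```shell
# Print pool-wide and per-device latency statistics every 5 seconds
zpool iostat -v -l tank 5
```

When the SSD's SLC cache runs out, you should see the write wait columns spike in step with the transfer stalling.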

When it comes to SSDs, especially in the consumer range, one needs to be very careful when considering what to buy. Look for benchmarks that ran for more than a few minutes and, for now, avoid anything QLC. QLC SSDs might get better in the future, but for now they are usually a bad idea.

And even if a consumer SSD looks okay in the specs and some older benchmarks, search the internet for recent news about it. In recent years it has happened a few times that manufacturers changed their SSDs considerably (different controller, TLC to QLC chips, ...), thus changing their performance, without clearly marking them as a new model or revision. Therefore, what you buy and what was reviewed a year or two ago can be very different things.


One more thing regarding the QVOs: I personally have two 4TB 870 QVOs (as backup storage) and did some benchmarks before using them.

For small writes (4k) it made no difference if the benchmark ran 1 or 10 minutes as the cache never filled up.
When benchmarking with larger writes (4M) for one minute, the SSD seemed okayish, resulting in ~450MB/s.

Benchmarking large writes for 10 minutes, showed how bad they are for sustained large writes. After about 3 minutes, the write performance dropped down to somewhere between 80 and 105 MB/s and the worst measurement was even down to 65MB/s.
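For reference, a sustained large-write test like that can be reproduced with `fio`. This is a sketch with an assumed test path; it writes a large file to wherever `--filename` points, so run it against a scratch location on the pool:

```shell
# 10-minute sequential 4M write test (adjust the path to your pool's mountpoint)
fio --name=seqwrite --filename=/tank/fio.test --rw=write --bs=4M \
    --size=20G --runtime=600 --time_based --ioengine=libaio \
    --direct=1 --iodepth=8 --group_reporting
```

The long runtime matters: a one-minute run can finish before the SLC cache is exhausted and make the drive look much faster than it is under sustained load.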

So yeah, consider them if you mostly need fast (random) reads. But do not expect anything great regarding writes, especially if you need to write a lot.
So, TL;DR: what can I do (on the software side) to optimize ZFS performance?
Unfortunately: get non-QLC SSDs that have good benchmarks for long and large writes.
 
I was basically writing the same as aaron...so I guess I don't need to finish that.

You can't make a very slow SSD fast with ZFS config optimizations. Just make sure to enable lz4 compression so less data needs to be written to the SSD (but you already got that). Enabling relatime might also help a little bit, so not every read causes a write to update the atime. The default 8K volblocksize should be fine if you only use two disks. So there is not much that can be done to optimize it.
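For completeness, those two settings can be checked and applied with `zfs set`/`zfs get` ("tank" is a placeholder for the pool name):

```shell
# Enable lz4 compression and relatime on the pool
# (child datasets inherit these unless overridden)
zfs set compression=lz4 tank
zfs set relatime=on tank

# Verify the current values and where they are set
zfs get compression,relatime tank
```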

So it's basically a typical case of "buy cheap, buy twice". Search this forum for people complaining about horrible ZFS performance with Samsung QVOs. Then they buy proper enterprise SSDs, or at least TLC/MLC consumer SSDs, and all their problems are solved.
 
I wanted to confirm that my problems have been solved by switching to one decent enterprise SSD. I now run one Intel S4510 1.92TB TLC and performance has been smooth.
I replicate and back up my VMs, so I accept the risk of running one (proper) disk instead of two mirrored drives.
 

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!

Get your subscription!

The Proxmox team works very hard to make sure you are running the best software and getting stable updates and security enhancements, as well as quick enterprise support. Tens of thousands of happy customers have a Proxmox subscription. Get yours easily in our online shop.

Buy now!