Slow performance with ZFS?

TheDulki

New Member
Mar 22, 2015
Hello. I'm having some performance issues with ZFS 0.6.4 on Proxmox 3.4.
I'm running two WD Green 2TB drives in RAID 1 (mirror). When I run hdparm inside a VM, I'm getting anywhere from 40 to 50 MB/s.
My zfs_arc_max is set to 4GB.
Is this considered normal?
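For reference, the 4GB cap is the usual ZFS module option, set in bytes; something along these lines, assuming it lives in /etc/modprobe.d/zfs.conf:

# /etc/modprobe.d/zfs.conf  (assumed location; 4 GiB expressed in bytes)
options zfs zfs_arc_max=4294967296

# the value currently in effect can be checked with
cat /sys/module/zfs/parameters/zfs_arc_max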
 
With green drives you need to use a small SSD for L2ARC, and possibly SLOG, caching. 40 to 50 MB/s is not surprising for green drives; even fast WD Black gaming drives struggle to reach 100 MB/s and might give 80 MB/s reliably. The SSD cache will help.
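As a rough sketch, assuming the Proxmox default pool name rpool and a placeholder device name, attaching an SSD (or a partition of one) as L2ARC is a single command:

# add the SSD partition as a read cache (L2ARC); pool and device names are placeholders
zpool add rpool cache /dev/disk/by-id/ata-YOUR_SSD-part1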
 
Doesn't he need two small SSD drives? The day the SSD goes bad and he reboots the machine, won't he have a very bad day?
 
Not for L2ARC; only if you choose to run with SLOG, which benefits from a mirror. Remember that the SLOG should never be read unless there is a system crash. It is just a backup copy of data written with the sync bit set, which sits in the normal RAM queue along with everything else and gets asynchronously written to the HD, say every 30 secs, anyway. SLOG only helps applications that write synchronously, and most don't, since async is the default. I doubt the OP would be running serious synchronous apps like a busy database with only 4GB ARC and green drives.
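If you want to check whether sync writes even matter for your workload, and how a mirrored SLOG would be attached if they do, here is a sketch with placeholder pool and device names:

# 'standard' means only applications that explicitly request synchronous writes touch the SLOG
zfs get sync rpool

# mirrored SLOG, so a single dead SSD cannot lose in-flight sync writes after a crash
zpool add rpool log mirror /dev/disk/by-id/ata-SSD_A-part2 /dev/disk/by-id/ata-SSD_B-part2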
 
If I bought two 2TB Red drives and put them in a RAID 10 with the 2TB Green drives, I would be getting much better performance, right?
 
The Red drives are slower than the Green; both are only 5400 rpm. Both are terrible options for ZFS. If you need speed, use the Black drives.
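For what it's worth, turning an existing mirror into a stripe of mirrors (the RAID 10 layout asked about above) is just a matter of adding a second mirror vdev; device names below are placeholders:

# adds a second mirrored pair to the pool; existing data stays on the old pair, only new writes are striped across both
zpool add rpool mirror /dev/disk/by-id/ata-WD_RED_1 /dev/disk/by-id/ata-WD_RED_2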
 
FYI
4 WD RE4 enterprise drives (7200 rpm)
Proxmox 4
RAID 10 ZFS
1 128 GB Samsung Pro SSD for L2ARC (I have no idea what difference this makes; see the usage check after the results below)

Inside a VM, here are my results.

hdparm -tT /dev/vdb

/dev/vdb:
Timing cached reads: 20098 MB in 2.00 seconds = 10058.33 MB/sec
Timing buffered disk reads: 778 MB in 3.00 seconds = 259.30 MB/sec
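To see whether the L2ARC device is actually doing anything, the per-device pool statistics are probably the simplest check (pool name is a placeholder):

# the cache device gets its own line with its own capacity and read/write counters
zpool iostat -v rpool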
 
For me, Red drives are worth it because of their longer warranty period (3 years). I'm using several ZFS pools built from those drives, mainly for online backup purposes. A significant difference between the Red and the Green is that the Reds have a better suspension system to damp vibrations more effectively, making them more suitable for arrays and giving a larger MTBF, and of course they are officially supported for RAID and also support TLER in firmware. I measured linear read speeds of around 70 to 140 MB/s (inner platter to outer edge). Scrubbing a pool consisting of a stripe of mirrors (RAID10-like) goes at about 50-60 MB/s IIRC, which I find a little slow, but I don't care in these particular pools. But yeah, don't expect very good performance with only the bare drives; adding L2ARC and ZIL on an SSD will help a lot with performance and robustness.
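If you want to confirm the TLER point yourself, smartctl can read the SCT Error Recovery Control timers; a sketch with a placeholder device path:

# Reds normally report a 7.0 second read/write recovery limit here; Greens usually report it as disabled or unsupported
smartctl -l scterc /dev/sda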
 
