ZFS Noob - looking for some guidance on performance -> SAS3 SSDs in ZFS-RAID10

fuzzybeanbag

New Member
Dec 22, 2022
Hi all,

System specs are: Xeon D-1528 6c/12t, 32 GB of RAM, and a Supermicro AOC-S3008L-L8e (SAS3008-based) HBA driving 4x Samsung SAS3 enterprise SSDs. Link speed is 12 Gb/s, so hardware-wise we seem to be good.

I've set up a simple RAID10 (striped mirrors) zpool with sync disabled (server is on a UPS) and lz4 compression - this was done via the Proxmox UI.

With sync disabled, I'm getting what seems like poor performance for the hardware. Running the following command from within a VM running on the pool:

fio --name=seqwrite --rw=write --direct=1 --bs=256k --numjobs=8 --size=10G --runtime=600 --group_reporting

got the following results:

Run status group 0 (all jobs):
  WRITE: bw=567MiB/s (595MB/s), 567MiB/s-567MiB/s (595MB/s-595MB/s), io=80.0GiB (85.9GB), run=144359-144359msec

Disk stats (read/write):
  dm-0: ios=974/331923, merge=0/0, ticks=5040/1095104, in_queue=1100144, util=98.20%, aggrios=254/328410, aggrmerge=720/3559, aggrticks=1296/1059505, aggrin_queue=1062035, aggrutil=97.91%
  sda: ios=254/328410, merge=720/3559, ticks=1296/1059505, in_queue=1062035, util=97.91%

Granted there will be a bit of overhead doing it from a VM, but I feel I should be seeing better write speeds than ~600 MB/s...

What am I missing here? Should I have created the pool via the CLI with custom flags rather than doing it in the PVE UI (and if so, is there a guide that's more or less ELI5-grade)?
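By "custom flags" I mean something along these lines - just a guess on my part, with "tank" and the disk paths as placeholders:

zpool create -o ashift=12 -O compression=lz4 -O atime=off tank \
  mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
  mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4
zfs set sync=disabled tank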

For reference - if I SCP a 5 GB file from that VM to my Mac Studio over my 10 Gb home network, I get ~150 MB/s transfer speeds :\

Thanks in advance!
 
For comparison - this is the same job on a RAIDZ1 setup running 4x WD Red 5400 RPM drives on TrueNAS SCALE :(

Run status group 0 (all jobs):
  WRITE: bw=296MiB/s (310MB/s), 296MiB/s-296MiB/s (310MB/s-310MB/s), io=80.0GiB (85.9GB), run=276719-276719msec
 
sync disabled (server is on a UPS)
Won't prevent you from losing your whole pool on a kernel crash or hardware defect. So keep recent backups.

You could try running fio directly on the host to see if virtualization affects the performance. First create an empty zvol (zfs create -V 16G YourPool/benchmark) and then let fio write directly to that zvol using "--filename=/dev/zvol/YourPool/benchmark".
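For example, something like this, reusing the job parameters from above (just a sketch - "YourPool" is a placeholder, and with --numjobs=8 the jobs will overwrite the same region of the 16G zvol, which is fine for a pure throughput test):

zfs create -V 16G YourPool/benchmark
fio --name=seqwrite --rw=write --direct=1 --bs=256k --numjobs=8 --size=10G --runtime=600 --group_reporting --filename=/dev/zvol/YourPool/benchmark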

You could also destroy the ZFS pool and use fio to benchmark the individual disks (writing directly to the disks, which destroys their contents) to see if you've got a bad one that is slowing the whole pool down.
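For example, one disk at a time (sketch only - replace /dev/sdX with the real device; this will overwrite whatever is on it):

fio --name=rawdisk --rw=write --direct=1 --bs=256k --numjobs=1 --size=10G --runtime=120 --group_reporting --filename=/dev/sdX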
 
I ended up destroying the pool and recreating it with ashift=13 (for the SSDs), atime disabled and sync disabled - now getting about 800 MB/s. I'll take that with the old CPU I have.
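Concretely, I recreated the pool more or less as in the sketch earlier in the thread, just with -o ashift=13 at creation time (ashift can't be changed afterwards), and then set ("tank" again being a placeholder):

zfs set atime=off tank
zfs set sync=disabled tank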
 
