FreeNAS HDD and SSD pools used on Proxmox (NFS) - SSD pool has lousy speed on the Proxmox side

prxtester

I am seeing some strange behavior here, and after 2 weeks of fiddling around I am somewhat out of options.
There is a Proxmox cluster and a FreeNAS box with 8 drive bays; I use the FreeNAS as a backup device via NFS.
So far I had 4 HDDs in the FreeNAS set up as raidz1. The FreeNAS box is connected to the Proxmox cluster via a dedicated 10G network.
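The 10G link itself can be checked independently of the disks. A quick sanity check, assuming iperf3 is installed on both ends (the IP below is just a placeholder for the FreeNAS box):

On the FreeNAS box:
# iperf3 -s

On the PVE node:
# iperf3 -c 192.168.100.10

If that reports close to line rate in both directions (add -R for the reverse test), the network is out of the picture.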

Then I bought 4 SSDs, put them into the remaining 4 drive bays of the FreeNAS box, created a raidz1 pool from them as well, and attached it to the Proxmox cluster.
All fine... except that when I switched my Proxmox backups from the FreeNAS/HDD/NFS storage to the FreeNAS/SSD/NFS storage, the backups took even longer.
Bummer.

So to sum it up: the whole setup (HDD vs. SSD) is the same, except that one pool consists of 4 HDDs whereas the other pool consists of 4 SSDs.
Neither the HDDs nor the SSDs are high-end devices, just consumer products (yes, I know ;) ), but they are only used for backups.
More precisely:
4x ST1000LM048 Seagate Barracuda 1TB
4x Samsung SSD 860 QVO 1TB
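For context, this is roughly how such a setup looks on the command line. The pool and dataset names match the paths used below, while the device names and the server IP are just placeholders:

On the FreeNAS box (normally done via the UI, shown here as the equivalent commands):
# zpool create test1 raidz1 ada4 ada5 ada6 ada7
# zfs create test1/ssd_pool
The dataset is then exported via Sharing > Unix (NFS) Shares.

On the Proxmox side the export is added as an NFS storage for backups:
# pvesm add nfs ssd --server 192.168.100.10 --export /mnt/test1/ssd_pool --content backup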

So I tested the speed with fio on the proxmox side and on the freenas side:

freenas box:
HDD:
# cd /mnt/NASPOOL/NFS_freenas
# fio --size=9G --bs=64k --rw=write --direct=1 --runtime=60 --name=64kwrite --group_reporting | grep bw
bw ( KiB/s): min=83944, max=1474398, per=99.59%, avg=405604.50, stdev=303969.18, samples=46
WRITE: bw=398MiB/s (417MB/s), 398MiB/s-398MiB/s (417MB/s-417MB/s), io=9216MiB (9664MB), run=23171-23171msec

SSD:
# cd /mnt/test1/ssd_pool
# fio --size=9G --bs=64k --rw=write --direct=1 --runtime=60 --name=64kwrite --group_reporting | grep bw
bw ( KiB/s): min=214738, max=1411704, per=99.58%, avg=442550.05, stdev=314083.19, samples=42
WRITE: bw=434MiB/s (455MB/s), 434MiB/s-434MiB/s (455MB/s-455MB/s), io=9216MiB (9664MB), run=21235-21235msec

So yep, the SSDs are not impressive, but at least they are not slower than the HDDs ;)

proxmox:
HDD:
# cd /mnt/pve/NFS_freenas
# fio --size=9G --bs=64k --rw=write --direct=1 --runtime=60 --name=64kwrite --group_reporting | grep bw
write: io=6410.8MB, bw=109407KB/s, iops=1709, runt= 60001msec

SSD:
# cd /mnt/pve/ssd
# fio --size=9G --bs=64k --rw=write --direct=1 --runtime=60 --name=64kwrite --group_reporting | grep bw
write: io=1103.9MB, bw=18838KB/s, iops=294, runt= 60001msec

So on the Proxmox side the SSD speed is horrible. I also tried using just a single SSD instead of the raidz1 on the FreeNAS box, but the bw values stayed more or less the same. What puzzles me most is that on the FreeNAS side everything looks fine, whereas on the Proxmox side the SSDs don't even reach 20% of the speed of the HDDs.
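One thing worth comparing before swapping any hardware, just as a suggestion: how the two NFS shares are actually mounted on the PVE node, since NFS version, rsize/wsize and sync-related options all shape how those 64k writes arrive at the FreeNAS box:

# nfsstat -m
or
# grep nfs /proc/mounts

If both mounts report identical options, the difference most likely comes from the pools themselves rather than from the NFS client side.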

I am open to any ideas on what to try to solve this mystery...
 
Hmm, running fio directly on the FreeNAS box with `--direct=1` will bypass the buffers, while I am not sure whether the buffers on the FreeNAS box come into play when writing to the NFS share from the PVE host.

What are the results if you run fio directly on the FreeNAS box without `--direct=1`? Closer to what you see on the PVE node?

Also, those SSDs do seem terribly slow once their buffer is full [0] (page 5, note 3):
3) Sequential write performance measurements are based on Intelligent TurboWrite technology. Performances after Intelligent Turbowrite region are 80 MB/s (1TB), 160 MB/s (2/4TB).

[0] https://s3.ap-northeast-2.amazonaws.com/global.semi.static/Samsung_SSD_860_QVO_Data_Sheet_Rev1.pdf
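If the suspicion is that the local fio run and the NFS path differ in how writes are buffered, one way to approximate the worst case locally is to force synchronous writes on the dataset and rerun the same job. This is just a sketch using the dataset name from the post above; consumer QLC drives are typically much slower once every write has to be committed synchronously:

# zfs set sync=always test1/ssd_pool
# fio --size=9G --bs=64k --rw=write --direct=1 --runtime=60 --name=64kwrite --group_reporting | grep bw
# zfs set sync=standard test1/ssd_pool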
 

OK, so on the FreeNAS box without `--direct=1`:

# cd /mnt/test1/ssd_pool
# fio --size=9G --bs=64k --rw=write --runtime=60 --name=64kwrite --group_reporting | grep bw
bw ( KiB/s): min=244247, max=1549668, per=99.60%, avg=444121.98, stdev=329387.20, samples=42
WRITE: bw=435MiB/s (457MB/s), 435MiB/s-435MiB/s (457MB/s-457MB/s), io=9216MiB (9664MB), run=21165-21165msec

So it's pretty much the same.

And yes, I know the QVO are not high-end products, but as far as I know they have a 40 GB cache, which should be more than enough for all my backups (it's just a little test cluster).

However, to rule out lousy SSDs, I will gather some other SSDs, swap them in, and do some more tests.
It is just weird that the performance is only 1/5 of that of the HDDs (which are not high-performance products either).
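One way to check the TurboWrite theory independently of NFS, again just a sketch: write noticeably more data than the stated buffer directly on the FreeNAS box and see whether the rate drops towards the post-buffer figure from the datasheet. Leaving out --runtime lets the job write the full amount:

# cd /mnt/test1/ssd_pool
# fio --size=60G --bs=64k --rw=write --direct=1 --name=sustained_write --group_reporting | grep bw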
 
