I am seeing some strange behavior here, and after two weeks of fiddling around I am running out of options.
The setup: a Proxmox cluster and a FreeNAS box with 8 drive bays; I use the FreeNAS as a backup target via NFS.
Initially I had 4 HDDs in the FreeNAS as a raidz1 pool. The FreeNAS box is connected to the Proxmox cluster via a dedicated 10G network.
Then I bought 4 SSDs, put them into the remaining 4 drive bays of the FreeNAS box, created a second raidz1 pool from them, and attached it to the Proxmox cluster.
All fine... except that when I switched my Proxmox backups from the FreeNAS/HDD/NFS storage to the FreeNAS/SSD/NFS storage, the backups took even longer.
Bummer.
So to sum it up: the two setups are identical except that one pool consists of 4 HDDs while the other consists of 4 SSDs.
Neither the HDDs nor the SSDs are high-end devices, just consumer products (yes, I know), but the box is only used for backups.
More exactly:
4x ST1000LM048 Seagate Barracuda 1TB
4x Samsung SSD 860 QVO 1TB
So I tested the speed with fio on the proxmox side and on the freenas side:
FreeNAS box:
HDD:
# cd /mnt/NASPOOL/NFS_freenas
# fio --size=9G --bs=64k --rw=write --direct=1 --runtime=60 --name=64kwrite --group_reporting | grep bw
bw ( KiB/s): min=83944, max=1474398, per=99.59%, avg=405604.50, stdev=303969.18, samples=46
WRITE: bw=398MiB/s (417MB/s), 398MiB/s-398MiB/s (417MB/s-417MB/s), io=9216MiB (9664MB), run=23171-23171msec
SSD:
# cd /mnt/test1/ssd_pool
# fio --size=9G --bs=64k --rw=write --direct=1 --runtime=60 --name=64kwrite --group_reporting | grep bw
bw ( KiB/s): min=214738, max=1411704, per=99.58%, avg=442550.05, stdev=314083.19, samples=42
WRITE: bw=434MiB/s (455MB/s), 434MiB/s-434MiB/s (455MB/s-455MB/s), io=9216MiB (9664MB), run=21235-21235msec
So yep, the SSDs are not impressive, but at least they are not slower than the HDDs.
Proxmox:
HDD:
# cd /mnt/pve/NFS_freenas
# fio --size=9G --bs=64k --rw=write --direct=1 --runtime=60 --name=64kwrite --group_reporting | grep bw
write: io=6410.8MB, bw=109407KB/s, iops=1709, runt= 60001msec
SSD:
# cd /mnt/pve/ssd
# fio --size=9G --bs=64k --rw=write --direct=1 --runtime=60 --name=64kwrite --group_reporting | grep bw
write: io=1103.9MB, bw=18838KB/s, iops=294, runt= 60001msec
So on the Proxmox side the SSD speed is horrible. I also tried using just a single SSD instead of the raidz1 on the FreeNAS box, but the bandwidth values stayed more or less the same. What puzzles me most is that on the FreeNAS side everything looks fine, whereas over NFS on the Proxmox side the SSDs reach not even 20% of the HDD speed.
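Just to put a number on that claim, here is a trivial back-of-the-envelope check using the two Proxmox-side fio bandwidth figures from the runs above (nothing assumed beyond those two numbers):

```shell
#!/bin/sh
# Proxmox-side fio write bandwidth, in KB/s, taken from the runs above.
hdd=109407   # HDD pool via NFS
ssd=18838    # SSD pool via NFS

# Integer percentage of the HDD throughput that the SSD pool achieves.
ratio=$((ssd * 100 / hdd))
echo "SSD share reaches ${ratio}% of the HDD share's throughput"  # prints 17%
```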
I am open to any ideas on what to try next to solve that mystery...