SSD Storage ZFS advice

gaiex

Hello,

I'm relatively new to ZFS. I've read and searched for a direct answer but got very confused by the ZFS storage types in Proxmox, and I don't know how to proceed.

I have the following drives and datastores to create, and I would like your opinions and advice on which ZFS layout to use for each:

Server 1:
Dell R720xd - dual E5-2680 v2 - 192GB RAM - Dell PERC H710P in IT mode - 40GbE NIC
PBS primary rapid backup datastore: 12x 2TB Samsung 860 EVO
(need speed and IOPS without losing too much space)

Server 2:
Dell R720xd - dual E5-2680 v2 - 192GB RAM - Dell PERC H710P in IT mode - Dell PERC H810 in IT mode - 40GbE NIC
PBS secondary backup datastore: 24x 500GB Samsung 860 EVO
(without losing too much space and keeping some speed)

PBS archive backup datastore: 24x 2TB 7200rpm Western Digital drives in a SATA 6Gb/s NetApp disk shelf, connected to the Dell H810
(resilience/safety without losing too much speed)


What would you do?

PS: The drives are all used, from a retired cluster, and on Server 1 I can adjust the datastore capacity with more drives if needed.

Thanks
 
Don't use ZFS with EVO consumer drives. Just search the forums for the myriad of problems with those drives and ZFS.
Thanks for the reply.

Are you referring to/comparing against enterprise SSDs, or is there some incompatibility with ZFS?

For the scenario I have, and OK, assuming enterprise SSDs, what are your thoughts on the type of datastores to create/use?

Thanks for the help
 
Hello @gaiex, I would recommend using the Open-E storage calculator (choose single host) https://www.open-e.com/storage-and-raid-calculator/joviandss/ to get a feeling for how ZFS performs with specific setups. Usually you increase storage performance by using multiple vdevs, across which the data gets striped (even when using mirrors).

Here's an example: a two-way mirror across 6 vdevs, with an estimate of the usable disk space. You can only lose one disk per vdev, but you get good performance. The same applies to the RAID-Z levels, so play around a little to find the best option for you.


[Screenshot: Open-E calculator result for a 2-way mirror across 6 vdevs, including the usable capacity estimate]
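For reference, creating a pool like that from the shell would look roughly like this. This is just a sketch; the pool name and the device paths are placeholders, not a recommendation for your exact disks:

Code:
# 6 two-way mirror vdevs in one pool; ZFS stripes data across all 6 vdevs,
# so read/write performance scales with the number of vdevs
zpool create tank \
  mirror /dev/disk/by-id/ata-DISK01 /dev/disk/by-id/ata-DISK02 \
  mirror /dev/disk/by-id/ata-DISK03 /dev/disk/by-id/ata-DISK04 \
  mirror /dev/disk/by-id/ata-DISK05 /dev/disk/by-id/ata-DISK06 \
  mirror /dev/disk/by-id/ata-DISK07 /dev/disk/by-id/ata-DISK08 \
  mirror /dev/disk/by-id/ata-DISK09 /dev/disk/by-id/ata-DISK10 \
  mirror /dev/disk/by-id/ata-DISK11 /dev/disk/by-id/ata-DISK12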

But please never use consumer-grade SSDs for important data. They have no power-loss protection and are not made for heavy writes; they die very fast.
 
Thanks for the feedback.

That website tool @jsterr mentioned helped a lot.

To benchmark the ZFS pool, is the command below OK? How do I know it's running on the correct datastore?
I'm testing the 24x 500GB SSDs in RAIDZ1 with 4 vdevs, if that's the right way to put it.

Code:
fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting

[Screenshot, 2023-09-02: the RAIDZ1 pool layout with 4 vdevs]
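For reference, by "RAIDZ1 with 4 vdevs" I mean a layout roughly like this (just a sketch; the pool name and device names are placeholders, not my actual disks):

Code:
# 24x 500GB SSDs split into 4 RAIDZ1 vdevs of 6 disks each;
# ZFS stripes data across the 4 vdevs
zpool create backup-fast \
  raidz1 sda sdb sdc sdd sde sdf \
  raidz1 sdg sdh sdi sdj sdk sdl \
  raidz1 sdm sdn sdo sdp sdq sdr \
  raidz1 sds sdt sdu sdv sdw sdx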
 
How do I know it's running on the correct datastore?
By pointing "--filename=temp.file" at a mountpoint of your pool.
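For example, assuming the pool is called backup-fast and is mounted at /backup-fast (adjust to your actual pool name):

Code:
# check where the pool/dataset is mounted
zfs get mountpoint backup-fast
# then point fio's test file at that mountpoint so the benchmark actually hits the pool
fio --name TEST --eta-newline=5s --filename=/backup-fast/temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting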

You can also use proxmox-backup-client to benchmark your storage; that may be more useful than purely synthetic tests.
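Something along these lines (the repository user, host and datastore name are placeholders for your own setup):

Code:
# measures upload speed to the datastore over TLS plus local SHA-256, compression and AES rates
proxmox-backup-client benchmark --repository root@pam@pbs.example.local:backupstore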
 
Thanks for the help and for the feedback.

Going to jump over to this other thread because of another potential problem, the lack of speed of the datastore:
https://forum.proxmox.com/threads/poor-disk-perfomance-zfs-on-dell-perc-h310mini.131943/

Tested on the two servers, one with mdadm RAID and the other with RAIDZ1 with 4 vdevs as pictured above.
Performance is similar on both systems, which is strange, but maybe something is wrong with the Dell PERC H710P flashed to IT mode in both systems.

ZFS RAIDZ1, 4 vdevs (24x 500GB SSD)

Code:
fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.33
Starting 1 process
TEST: Laying out IO file (1 file / 2048MiB)
Jobs: 1 (f=1): [R(1)][100.0%][r=2002MiB/s][r=2002 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=82448: Fri Sep  1 23:45:32 2023
  read: IOPS=1906, BW=1906MiB/s (1999MB/s)(10.0GiB/5372msec)
    slat (usec): min=373, max=1934, avg=521.21, stdev=112.28
    clat (usec): min=3, max=52587, avg=16086.89, stdev=3205.34
     lat (usec): min=481, max=54521, avg=16608.10, stdev=3303.06
    clat percentiles (usec):
     |  1.00th=[10159],  5.00th=[14877], 10.00th=[14877], 20.00th=[15008],
     | 30.00th=[15270], 40.00th=[15664], 50.00th=[15664], 60.00th=[15795],
     | 70.00th=[15795], 80.00th=[15926], 90.00th=[16057], 95.00th=[27132],
     | 99.00th=[28181], 99.50th=[28705], 99.90th=[44827], 99.95th=[48497],
     | 99.99th=[51643]
   bw (  MiB/s): min= 1002, max= 2016, per=99.50%, avg=1896.60, stdev=314.65, samples=10
   iops        : min= 1002, max= 2016, avg=1896.60, stdev=314.65, samples=10
  lat (usec)   : 4=0.05%, 500=0.02%, 750=0.03%, 1000=0.01%
  lat (msec)   : 2=0.09%, 4=0.20%, 10=0.59%, 20=93.78%, 50=5.21%
  lat (msec)   : 100=0.04%
  cpu          : usr=0.73%, sys=99.24%, ctx=8, majf=4, minf=8207
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=1906MiB/s (1999MB/s), 1906MiB/s-1906MiB/s (1999MB/s-1999MB/s), io=10.0GiB (10.7GB), run=5372-5372msec



MDRAID: 9x 2TB SSD

Code:
/mnt/md127# fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.33
Starting 1 process
TEST: Laying out IO file (1 file / 2048MiB)
Jobs: 1 (f=1): [R(1)][100.0%][r=2202MiB/s][r=2202 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=489685: Sat Sep  2 16:53:24 2023
  read: IOPS=2010, BW=2010MiB/s (2108MB/s)(10.0GiB/5094msec)
    slat (usec): min=334, max=1780, avg=493.93, stdev=138.53
    clat (usec): min=3, max=51433, avg=15256.18, stdev=4167.27
     lat (usec): min=446, max=53213, avg=15750.10, stdev=4297.82
    clat percentiles (usec):
     |  1.00th=[ 9241],  5.00th=[13304], 10.00th=[13566], 20.00th=[13698],
     | 30.00th=[14091], 40.00th=[14091], 50.00th=[14222], 60.00th=[14222],
     | 70.00th=[14222], 80.00th=[14353], 90.00th=[15270], 95.00th=[26870],
     | 99.00th=[27395], 99.50th=[27657], 99.90th=[43779], 99.95th=[47449],
     | 99.99th=[50594]
   bw (  MiB/s): min= 1024, max= 2236, per=99.60%, avg=2002.20, stdev=435.62, samples=10
   iops        : min= 1024, max= 2236, avg=2002.20, stdev=435.62, samples=10
  lat (usec)   : 4=0.05%, 500=0.05%, 1000=0.05%
  lat (msec)   : 2=0.10%, 4=0.20%, 10=0.63%, 20=89.18%, 50=9.73%
  lat (msec)   : 100=0.02%
  cpu          : usr=1.16%, sys=98.76%, ctx=7, majf=0, minf=8207
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=2010MiB/s (2108MB/s), 2010MiB/s-2010MiB/s (2108MB/s-2108MB/s), io=10.0GiB (10.7GB), run=5094-5094msec
 
