First ZFS setup, so a sanity check on performance

czechsys

Renowned Member
Nov 18, 2015
Testing a new server with our first ZFS setup. I'm just asking for a sanity check on the default-setup performance of the NVMe ZFS pool, because we are considering adding a HW RAID card. The main purpose of the server will be GPU AI computing.

CPU: AMD EPYC 9354 32-Core Processor
RAM: 512 GB
Disks: all connected directly to the motherboard; 2x Kingston DC600M SATA, 2x Samsung PM9A3 960GB NVMe PCIe4x4
ZFS pools:
ARC=16GB
mirror (raid1)
ashift=9
compression=off
sync=yes
dedup=off
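For reference, a sketch of how these pool properties can be double-checked on the host. The pool name `rpool` is an assumption here; substitute the actual pool name:

```shell
# Pool name "rpool" is an assumption -- replace with the actual pool name.
zpool get ashift rpool                # ashift=9 implies 512 B sectors; NVMe drives usually prefer ashift=12 (4 KiB)
zfs get compression,sync,dedup rpool  # confirm the properties listed above
arc_summary | head -n 25              # verify the 16 GB ARC limit is actually in effect
```

Note that ashift is fixed at pool creation, so changing it means recreating the pool.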

Tests were run with our standard fio parameters, from the root mount of each pool, directly on the PVE host.

Code:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randread
Kingston: read limited by 1 core 100% utilization
  read: IOPS=72.9k, BW=285MiB/s (299MB/s)(4096MiB/14376msec)
   bw (  KiB/s): min=277128, max=357160, per=98.13%, avg=286301.04, stdev=16450.78, samples=28
   iops        : min=69282, max=89290, avg=71575.25, stdev=4112.68, samples=28

Samsung: read limited by 1 core 100% utilization
  read: IOPS=72.9k, BW=285MiB/s (299MB/s)(4096MiB/14386msec)
   bw (  KiB/s): min=277744, max=357440, per=98.10%, avg=286001.71, stdev=15898.45, samples=28
   iops        : min=69436, max=89360, avg=71500.43, stdev=3974.61, samples=28


fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite
Kingston:
  write: IOPS=6263, BW=24.5MiB/s (25.7MB/s)(4096MiB/167410msec); 0 zone resets
   bw (  KiB/s): min=18000, max=170544, per=97.73%, avg=24485.87, stdev=13028.94, samples=334
   iops        : min= 4500, max=42636, avg=6121.47, stdev=3257.24, samples=334

Samsung:
  write: IOPS=19.8k, BW=77.5MiB/s (81.2MB/s)(4096MiB/52872msec); 0 zone resets
   bw (  KiB/s): min=51768, max=314496, per=98.57%, avg=78197.41, stdev=33727.23, samples=105
   iops        : min=12942, max=78624, avg=19549.35, stdev=8431.81, samples=105


fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
Kingston: read limited by 1 core 100% utilization
  read: IOPS=18.6k, BW=72.6MiB/s (76.1MB/s)(3070MiB/42309msec)
   bw (  KiB/s): min=49592, max=205224, per=100.00%, avg=74304.10, stdev=29527.13, samples=84
   iops        : min=12398, max=51306, avg=18576.02, stdev=7381.78, samples=84
  write: IOPS=6208, BW=24.2MiB/s (25.4MB/s)(1026MiB/42309msec); 0 zone resets
   bw (  KiB/s): min=17456, max=68800, per=100.00%, avg=24831.43, stdev=9797.99, samples=84
   iops        : min= 4364, max=17200, avg=6207.86, stdev=2449.50, samples=84

Samsung: read limited by 1 core 100% utilization
  read: IOPS=53.5k, BW=209MiB/s (219MB/s)(3070MiB/14700msec)
   bw (  KiB/s): min=167040, max=356488, per=98.60%, avg=210856.00, stdev=38356.78, samples=29
   iops        : min=41760, max=89122, avg=52714.00, stdev=9589.20, samples=29
  write: IOPS=17.9k, BW=69.8MiB/s (73.2MB/s)(1026MiB/14700msec); 0 zone resets
   bw (  KiB/s): min=55512, max=117584, per=98.62%, avg=70488.55, stdev=12589.05, samples=29
   iops        : min=13878, max=29396, avg=17622.14, stdev=3147.26, samples=29
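Since every read test above reports a single core at 100%, the bottleneck may be per-job CPU overhead rather than the disks themselves. A variant of the same command that spreads the load over several jobs could show whether there is more headroom (the `--numjobs=4` split is an assumption, not part of the original test set; total queue depth stays at 4x16 = 64):

```shell
# Same 4k random-read workload, split across 4 jobs so fio is not
# bottlenecked on one CPU core; --group_reporting merges the results.
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=test --filename=test --bs=4k --iodepth=16 --numjobs=4 \
    --group_reporting --size=4G --readwrite=randread
```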
 
