ZFS, EPYC and NVMe - poor performance

flowbergit

Renowned Member
Jun 3, 2015
Hi,

I tried ZFS RAID 10 with 12 Samsung 2 TB NVMe drives, ashift set to 13 and compression set to lz4. The VM is Windows Server 2019 Standard with the VirtIO SCSI controller, and the results are poor.

A similar configuration with SAS RAID 10 (8 hard disks on an embedded RAID controller) performs much better, especially for writes. Are there any best practices to fix this?
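For reference, the layout is equivalent to something like the sketch below (pool name and device paths are placeholders, not my actual ones):

Code:
# Striped mirrors (RAID 10) across 12 NVMe devices, 8K sectors (ashift=13), lz4.
zpool create -o ashift=13 -O compression=lz4 nvme-pool \
  mirror /dev/nvme0n1 /dev/nvme1n1 \
  mirror /dev/nvme2n1 /dev/nvme3n1 \
  mirror /dev/nvme4n1 /dev/nvme5n1 \
  mirror /dev/nvme6n1 /dev/nvme7n1 \
  mirror /dev/nvme8n1 /dev/nvme9n1 \
  mirror /dev/nvme10n1 /dev/nvme11n1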
 

Attachments

  • ZFS_raid10.png (108.8 KB)
  • ZPool.png (142.4 KB)
  • SAS_RAID10.png (110.3 KB)
  • FIO_BENCHMARK.png (138.9 KB)
  • FIO_BENCHMARK_READ.png (140.2 KB)
Forget CrystalDiskMark and use fio inside the VM as well [0].
Please provide the output of `pveversion -v` as well as the VM config (`qm config <VMID>`).

When using fio, we recommend building a baseline by running the command for at least 10 minutes, writing to a file that is bigger than any cache involved (60-100G), and using `iodepth=1` and `numjobs=1`, as this better reflects the worst case. Also consider setting the `direct` option.

[0] https://bsdio.com/fio/
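A minimal sketch of such a baseline run; the file path and block size are placeholders to adjust:

Code:
# On the host; inside the Windows guest use --ioengine=windowsaio and a
# Windows path (e.g. D:\fio-testfile).
fio --name=baseline --filename=/path/on/zfs/fio-testfile --size=60G \
    --rw=randwrite --bs=4k --ioengine=libaio --direct=1 \
    --iodepth=1 --numjobs=1 --runtime=600 --time_based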
 
Code:
root@0-supermicro:~# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-3-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-8
pve-kernel-5.13: 7.1-6
pve-kernel-5.13.19-3-pve: 5.13.19-6
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-2
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-1
proxmox-backup-client: 2.1.3-1
proxmox-backup-file-restore: 2.1.3-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-5
pve-cluster: 7.1-3
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-4
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1

Code:
root@0-supermicro:~# qm config 167
agent: 1
balloon: 0
bios: ovmf
boot:
cores: 8
ide0: none,media=cdrom
memory: 73728
name: Test
net0: virtio=06:04:C7:A0:0E:40,bridge=vmbr1,firewall=1,tag=900
numa: 1
ostype: win10
scsi0: ZFS-nvme:vm-167-disk-0,iothread=1,size=500G,ssd=1
scsi1: ZFS-nvme:vm-167-disk-1,format=raw,iothread=1,size=150G,ssd=1
scsi2: ZFS-nvme:vm-167-disk-2,format=raw,iothread=1,size=75G,ssd=1
scsi3: ZFS-nvme:vm-167-disk-3,format=raw,iothread=1,size=60G,ssd=1
scsi4: ZFS-nvme:vm-167-disk-4,backup=0,format=raw,size=250G
scsihw: virtio-scsi-pci
smbios1: uuid=8ee700b7-c77d-4423-a383-f4bce25200e4
sockets: 1
vmgenid: 9439a61b-d032-4466-9a65-8f79752b0218
root@0-supermicro:~#
 

Attachments

  • Write_w2019.png (259.2 KB)
Please also set `numjobs` to `1` and increase the file size to ~60G.

You configured `iothread` on the disks, but for this to work, you also have to switch the controller to virtio-scsi-single.
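For example (VMID 167 as in the config above); the VM needs a full shutdown and start afterwards for the change to take effect:

Code:
qm set 167 --scsihw virtio-scsi-single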
 
Performance inside the Windows guest is still very slow compared to the same test on the hypervisor.

Setup: virtio-scsi-single
 

Attachments

  • iothread_enable_read.png (259 KB)
  • iothread_enable.png (256.9 KB)
Did you ever figure this out? I'm also seeing poor performance with 12 NVMe drives and an EPYC.
 
Did you ever figure this out? I'm also seeing poor performance with 12 NVMe drives and an EPYC.
If you don't want poor performance with ZFS (or Ceph, or Btrfs), don't use consumer SSDs. ZFS needs fast synchronous writes for its journal, so you need datacenter SSDs with a supercapacitor.

Otherwise, use hardware RAID, or software RAID with mdadm.
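As a rough check, you can benchmark synchronous 4k writes (roughly the pattern the ZFS journal produces). The file path below is a placeholder; put the test file on the ZFS pool and delete it afterwards:

Code:
fio --name=sync-write --filename=/path/on/zfs/fio-sync-test --size=10G \
    --rw=write --bs=4k --sync=1 --ioengine=psync \
    --iodepth=1 --numjobs=1 --runtime=60 --time_based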
 
You could also try comparing a benchmark with "primarycache=metadata" against one with "primarycache=all" and see if it helps.
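For example (the dataset name is an assumption based on the storage and disk names earlier in the thread):

Code:
zfs set primarycache=metadata ZFS-nvme/vm-167-disk-0   # cache only metadata in ARC
# ...repeat the benchmark inside the VM...
zfs set primarycache=all ZFS-nvme/vm-167-disk-0        # back to the default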
 
Hi,

1. I would try formatting one of the raw vHDDs inside the VM with a 4K block size and test again inside the VM with fio.
2. If you will not run any database in this VM (or other VMs), testing with direct I/O (direct=1) makes no sense!
   - If you will run a database in the future, it is better to use a dedicated SLOG (this avoids writing the same data twice in the pool) for better IOPS; see the sketch below.
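A sketch of attaching a dedicated SLOG, assuming a spare power-loss-protected device; the pool name and device path are placeholders:

Code:
# Add a dedicated log (SLOG) device to the pool; mirroring it is safer.
zpool add nvme-pool log /dev/disk/by-id/nvme-dc-ssd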

Good luck / Bafta !
 
" 12 NVMe Samsung 2TB" -> so consumer drives ? (as DC grade ssd are around 1,6TB).

You can't have good write performance with ZFS here, as ZFS needs fast synchronous writes for its journal (and consumer SSDs don't have a supercapacitor, so they have to flush their buffer each time ZFS writes to its journal).

Your RAID controller gives better performance because, I think, it has a cache + battery (so it works like a datacenter SSD with a supercapacitor).
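One way to confirm that synchronous writes are the bottleneck is to disable them temporarily for a benchmark only. This risks losing the last few seconds of writes on a crash, so revert it immediately afterwards; the dataset name is a placeholder:

Code:
zfs set sync=disabled ZFS-nvme/vm-167-disk-0   # for the benchmark only
# ...repeat the write benchmark inside the VM...
zfs set sync=standard ZFS-nvme/vm-167-disk-0   # restore the default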
 
" 12 NVMe Samsung 2TB" -> so consumer drives ? (as DC grade ssd are around 1,6TB).

You can't have good write performance with ZFS here, as ZFS needs fast synchronous writes for its journal (and consumer SSDs don't have a supercapacitor, so they have to flush their buffer each time ZFS writes to its journal).

Your RAID controller gives better performance because, I think, it has a cache + battery (so it works like a datacenter SSD with a supercapacitor).

Hi,

Agree 100% !

Good luck / Bafta !
 
