Hello,
I am having issues with bad ZFS performance on four identical servers with the following hardware:
DELL PowerEdge R6525, CPU 2x AMD EPYC 7262 8-Core Processor, 256GB RAM, HBA Symbios Logic SAS3416, 2 x SSD KPM5XVUG480G, 2 x SAS AL15SEB24EQY.
PVE software versions:
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.3-7 (running version: 6.3-7/85c4930a)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-9
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.1.3-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-2
pve-cluster: 6.2-1
pve-container: 3.3-5
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
The SSD drives are in a ZFS RAID1 (mirror), and the same goes for the SAS drives.
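For reference, this is how I check the pool layout and the ashift (the pool name below is a placeholder for the actual pool names):

zpool status
zpool get ashift <poolname>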
I am testing with fio:
fio --rw=write --ioengine=sync --fdatasync=1 --directory=/XXX --size=500m --bs=2300 --name=mytest
On the SSD mirror the performance is good, but on the SAS mirror fio estimates it will take about an hour and a half to complete.
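To narrow it down, the same job can be run without the per-write flush for comparison; if the SAS mirror is fast without --fdatasync=1, the bottleneck is the sync-write path (same placeholder directory as above):

fio --rw=write --ioengine=sync --directory=/XXX --size=500m --bs=2300 --name=mytest-nosync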
I came across https://zfsonlinux.topicbox.com/groups/zfs-discuss/T974149177bf5463c-M06b290f6709c4d2efc94ad52 and started suspecting the HBA controller. If anyone has such a controller and can run tests, could you share your results?
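One thing I still plan to check is whether the volatile write cache on the SAS drives is enabled, since sync writes crawl on spinning disks with the write cache off (smartmontools is installed per the list above; /dev/sdX is a placeholder for each SAS disk):

smartctl -g wcache /dev/sdX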
I am open to any ideas on how to fix this.
Thanks in advance.