Performance test (ZFS) between pve5.4 + pve6.0

udo

Hi,
yesterday I did a short performance test between pve5.4 and pve6.0, mainly to see if the ZFS performance is better, because we have some trouble with MySQL VMs on ZFS (SSD ZFS raid1).

Test:
hardware:
Dell R610 with 16GB RAM, HT on - 16 x Intel(R) Xeon(R) CPU X5560 @ 2.80GHz (2 sockets)
2 x Intel SSD DC S4500 1.92TB as ZFS raid1 on an HBA
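For reference, such a ZFS raid1 (mirror) pool can be created roughly like this - a sketch only, the PVE installer normally does this for you; pool name and device paths are placeholders:

Code:
# mirror over the two SSDs behind the HBA; use your real /dev/disk/by-id/... paths
zpool create -o ashift=12 rpool mirror /dev/disk/by-id/ssd-1 /dev/disk/by-id/ssd-2

# verify the layout
zpool status rpool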

Code:
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_max=6442450944
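These module options usually go into /etc/modprobe.d/zfs.conf; after changing them, rebuild the initramfs, reboot and check the effective ARC limits (a sketch, kstat names as in ZFS 0.7/0.8):

Code:
# apply the zfs.conf options at boot (needed when root is on ZFS)
update-initramfs -u

# after a reboot: effective ARC min/max in bytes
awk '/^c_min|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats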

Test1: pve5.4, non-subscription repo, latest updates
Test2: pve6.0.5, non-subscription repo, latest updates (dist-upgrade)

Inside a single VM (Ubuntu 14.04, 8GB RAM, 4 vCores) I ran several sysbench tests (1 - 16 threads), the same for both PVE versions.
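For reference, such a thread sweep can be scripted roughly as below - a sketch only, assuming sysbench 1.0 with its oltp_read_write script (the stock sysbench on Ubuntu 14.04 is older and uses the --test=oltp syntax instead); database name, credentials and table size are placeholders:

Code:
#!/bin/bash
# prepare the test tables once
sysbench oltp_read_write --mysql-db=sbtest --mysql-user=sbtest \
    --mysql-password=secret --tables=8 --table-size=1000000 prepare

# sweep the thread count, 60s per step
for t in 1 2 4 8 16; do
    sysbench oltp_read_write --mysql-db=sbtest --mysql-user=sbtest \
        --mysql-password=secret --tables=8 --table-size=1000000 \
        --threads="$t" --time=60 run
done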

The iowait is a little bit better, but the overall performance is not as good and the load is also higher with the new version.
The test takes 4 minutes longer with pve6 (44 min vs. 48 min).

OK - I have run the test only once, but the result is not as good as I expected...

Any possibilities to tune something?

The screenshot shows both runs (the orange one is pve6).

Udo
 

Attachments

  • mysql_run_sysbench_r610_zfs_ssds_pve5.4_u_pve6.0.png (131.8 KB)
ZFS had some issues with SIMD acceleration in recent Linux kernels, including the one initially shipped with PVE 6.0.

Since version 'pve-kernel-5.0.21-1-pve' we have included a fix, so you could try to run your benchmarks again with this version installed to see if it makes a difference.
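To check whether the SIMD code paths are actually used again after booting the new kernel, something like this should work (paths assume ZFS 0.8 on PVE 6):

Code:
# install and boot the fixed kernel
apt install pve-kernel-5.0.21-1-pve && reboot

# after the reboot: the selected fletcher_4 implementation should be a
# vectorized one (e.g. avx2) instead of scalar/superscalar
cat /sys/module/zcommon/parameters/zfs_fletcher_4_impl
cat /proc/spl/kstat/zfs/fletcher_4_bench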
 
Any possibilities to tune something?

Yes. You can also tune the ARC metadata min/max. Since you are testing MySQL, you can also set a 16k volblocksize at least for /var/lib/mysql, cache metadata only for that same /var/lib/mysql, and use a 128k volblocksize for the MySQL log and the OS (Linux), and so on.
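A rough sketch of what that could look like on the host - pool/zvol names are placeholders, "metadata only" is read here as primarycache=metadata, and the metadata ARC knobs are the ZoL 0.8 module parameters:

Code:
# dedicated zvol for /var/lib/mysql with 16k volblocksize
# (volblocksize can only be set at creation time)
zfs create -V 50G -o volblocksize=16k rpool/data/vm-100-disk-1
zfs set primarycache=metadata rpool/data/vm-100-disk-1

# OS and MySQL-log disks with larger blocks
zfs create -V 32G -o volblocksize=128k rpool/data/vm-100-disk-2

# ARC metadata tuning via /etc/modprobe.d/zfs.conf
# options zfs zfs_arc_meta_min=...
# options zfs zfs_arc_meta_limit=...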
 
Hi,
@guletz: I will try the volblocksize settings later.

With the new kernel, the test takes 40m27.5s and the load looks much better.

Code:
pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.21-1-pve)
pve-manager: 6.0-6 (running version: 6.0-6/c71f879f)
pve-kernel-5.0: 6.0-7
pve-kernel-helper: 6.0-7
pve-kernel-5.0.21-1-pve: 5.0.21-1
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.11-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-4
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-7
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-64
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-7
pve-cluster: 6.0-5
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve2
Udo
 

Attachments

  • pve6_5.0.21.png (158.6 KB)