Proxmox 5 beta, Areca 1883i, 10x LESS Fsync/Sec vs 4.x

mmenaz

Renowned Member
I have a new server with an Areca 1883i RAID controller, 4 x 1TB WD Gold enterprise drives in RAID10 and 2 x 2TB WD Gold enterprise drives in RAID1. The controller has a "supercap" and write-back cache enabled, and the arrays were initialized last night.
With Proxmox 4, the same controller with 4 x 500GB consumer drives in RAID10 did 4200 FSYNCS/SECOND, and with 4 x VelociRaptors it did 9100.
The arcmsr kernel module is v1.30.00.22-20151126; with Proxmox 4 it was v1.30.00.04-20140919.
The question is: has pveperf changed something? Is there any known issue with Areca controllers? I would like to avoid removing Proxmox 5 and installing 4.x just to check.
Code:
# pveperf
CPU BOGOMIPS:      67200.96
REGEX/SECOND:      1934014
HD SIZE:           93.99 GB (/dev/mapper/pve-root)
BUFFERED READS:    571.12 MB/sec
AVERAGE SEEK TIME: 7.03 ms
FSYNCS/SECOND:     330.49
DNS EXT:           67.11 ms
DNS INT:           71.96 ms
and
Code:
# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro,noatime 0 1
finally
Code:
# pveversion -v
proxmox-ve: 5.0-2 (running kernel: 4.10.1-2-pve)
pve-manager: 5.0-5 (running version: 5.0-5/c155b5bc)
pve-kernel-4.10.1-2-pve: 4.10.1-2
libpve-http-server-perl: 2.0-1
lvm2: 2.02.168-pve1
corosync: 2.4.2-pve2
libqb0: 1.0.1-1
pve-cluster: 5.0-3
qemu-server: 5.0-1
pve-firmware: 2.0-1
libpve-common-perl: 5.0-3
libpve-guest-common-perl: 2.0-1
libpve-access-control: 5.0-1
libpve-storage-perl: 5.0-2
pve-libspice-server1: 0.12.8-3
vncterm: 1.4-1
pve-docs: 5.0-1
pve-qemu-kvm: 2.7.1-500
pve-container: 2.0-4
pve-firewall: 3.0-1
pve-ha-manager: 2.0-1
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.7-500
lxcfs: 2.0.6-pve500
criu: 2.11.1-1~bpo90
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
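By the way, for anyone who wants to cross-check the figure independently of pveperf, a quick fio run that fsyncs after every small write should land in the same ballpark (just a sketch, not pveperf's exact loop; fio needs to be installed first and the test directory is arbitrary):
Code:
apt-get install -y fio
# 4k sequential writes with an fsync after each one, roughly what the
# FSYNCS/SECOND figure represents; the reported write IOPS is the fsync rate
fio --name=fsync-test --directory=/root --ioengine=psync --rw=write \
    --bs=4k --size=64M --fsync=1 --runtime=30 --time_based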
 
If you have a BBU, you may set barrier=0 (see 'man ext4')
Since I have a BBU, barrier=0 was never necessary; the write-back cache took care of the performance drawback.
I've now installed 4.4 on the same hardware, with the same fstab config, and I get 8800 fsyncs/second!
So either something is badly broken performance-wise in the 5.0 beta, at least with Areca, or the way pveperf measures has changed.
In short, I want to know whether I can keep going with the 5.0 installation and be sure that VM I/O performance will be good, or whether I'd better stay on 4.4 and never upgrade because of an Areca performance problem with 5.0.
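For reference, the quoted barrier=0 suggestion would translate to something like this (a sketch only, and only safe because the controller cache is supercap protected; I have not actually applied it, since the write-back cache already covers the performance side):
Code:
# /etc/fstab -- barrier=0 only with a battery/flash-protected write cache,
# otherwise a power loss can corrupt the filesystem
/dev/pve/root / ext4 errors=remount-ro,noatime,barrier=0 0 1

# apply without a reboot (or simply reboot)
mount -o remount /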
 
If you have a BBU, you may set barrier=0 (see 'man ext4')
I've done a "better than nothing" test: on my production server I created 2 VMs, 32GB disks on thin-LVM storage, 2GB RAM each.
One has 4.4, the other the 5.0 beta, both with the default configuration (no updates, no config changes, no new packages installed, just a bare installation and a reboot).
Averaged over 4 tests, 4.4 gets an FSYNC figure around 2300, while 5.0 gets 300!
So either 5.0 is much slower at I/O, or pveperf no longer provides results comparable with previous Proxmox versions.
Any clue?
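For reference, spinning up such a test VM boils down to roughly this (a sketch; the vmid, storage name and ISO filename are placeholders, adjust to your setup):
Code:
# 2GB RAM, 32GB disk on thin-LVM storage, virtio-scsi controller
qm create 105 --name pve5test --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:32 \
  --ide2 local:iso/proxmox-ve_5.0-beta.iso,media=cdrom --ostype l26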
 
strange...
But why does that matter? We are comparing a bare-metal installation of Proxmox 4 (ext4, barriers on) with one of Proxmox 5 (ext4, barriers on), and performance is dramatically lower.
Just try it yourself: create 2 VMs (same resources, same storage type and destination), install Proxmox 4 and 5, then compare (virtio-scsi, for the record). The same applies on real hardware (not tested with a SATA controller, just with Areca so far).
Surely you have some server with the 5 beta installed; how does pveperf compare there with 4.x?
 
FOUND the problem!
Proxmox 5 defaults to the cfq I/O scheduler instead of deadline!
Code:
root@pve5test:~# grep . /sys/block/sd*/queue/scheduler
noop deadline [cfq]
In the VM I then ran
Code:
root@pve5test:~# echo deadline > /sys/block/sda/queue/scheduler
root@pve5test:~# grep . /sys/block/sd*/queue/scheduler
noop [deadline] cfq
and repeated the tests: now the dd latency test shows 1.0 MB/s and pveperf fsync is 2232.95.
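(The dd check is nothing special, just something along these lines; the exact flags I used may differ:)
Code:
# small writes flushed to disk one by one; compare the MB/s the two VMs report
dd if=/dev/zero of=/root/ddtest bs=4k count=1000 oflag=dsync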
I will reinstall 5 on the physical server and change the scheduler.
Probably this is a bug that needs to be fixed, or have the Proxmox developers changed their mind and gone back to cfq (which was the default long ago)?
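Until it is clear whether this is intended, one way to make the scheduler change persistent across reboots (assuming GRUB and the legacy, non-blk-mq block layer shown above) is the elevator= boot parameter:
Code:
# /etc/default/grub -- applies deadline to all legacy block devices at boot
GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=deadline"

# then regenerate the grub config and reboot
update-grub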
 
