4.2 perf is lower than 4.1

stefws

Upgraded 4 of 7 nodes today, only to discover that two VMs in particular (Palo Alto VM-200 FWs) use much more CPU than they did on pve 4.1 :(

Pic 1 here shows VM CPU usage over the last 24 hours and the jump when the VM was migrated onto pve 4.2-2 around 17:00; the last high jump is me putting more load on the FW.

Pic 2 here shows VM usage over the past hour, where the load drops when the VM is migrated onto a hypervisor still on pve 4.1-22.
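
For reference, the move between nodes can be done live from the CLI; a minimal sketch, assuming the FW is VMID 200 and the target 4.1 node is n7 (both placeholders):

root@n1:~# qm migrate 200 n7 --online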

Will postpone upgrading the last three hypervisor nodes for now.

Any clues as to why there is this more-than-marginal difference?
 
Pic 3 shows the past hour's usage and roughly a 5-fold drop in CPU usage after migrating my hot-spare FW VM from pve 4.2-2 back to a pve 4.1-22 node.
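
To put a number on the difference beyond the graphs, one can sample the CPU use of the guest's KVM process on each host; a sketch, assuming VMID 200 and that sysstat (pidstat) is installed (PVE keeps the PID in /var/run/qemu-server/<vmid>.pid):

root@n1:~# pidstat -p $(cat /var/run/qemu-server/200.pid) 5 12    # 12 samples, 5s apart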

root@n7:~# pveversion --verbose
proxmox-ve: 4.1-39 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-22 (running version: 4.1-22/aca130cf)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-39
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-36
qemu-server: 4.0-64
pve-firmware: 1.1-7
libpve-common-perl: 4.0-54
libpve-access-control: 4.0-13
libpve-storage-perl: 4.0-45
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-9
pve-container: 1.0-52
pve-firewall: 2.0-22
pve-ha-manager: 1.0-25
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve7~jessie
openvswitch-switch: 2.3.2-2

root@n1:~# pveversion --verbose
proxmox-ve: 4.2-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-2 (running version: 4.2-2/725d76f0)
pve-kernel-4.4.6-1-pve: 4.4.6-48
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-72
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-14
pve-container: 1.0-62
pve-firewall: 2.0-25
pve-ha-manager: 1.0-28
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie
openvswitch-switch: 2.3.2-3
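
Comparing the two dumps, the biggest deltas are the kernel (4.2.8-1-pve vs 4.4.6-1-pve) and pve-qemu-kvm (2.5-9 vs 2.5-14). To rule the kernel in or out, one could boot an upgraded node on the old 4.2.x kernel while keeping the 4.2 userland; a sketch, assuming the old kernel package can still be installed on n1:

root@n1:~# apt-get install pve-kernel-4.2.8-1-pve
root@n1:~# awk -F\' '/^menuentry |^submenu /{print $2}' /boot/grub/grub.cfg
root@n1:~# # point GRUB_DEFAULT in /etc/default/grub at the 4.2.8 entry listed above, then:
root@n1:~# update-grub && reboot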
 
We don't see the same regression with our other VMs running CentOS 6.7 w/elrepo kernel-ml 4.5.1.

I assume the Linux distro under the Palo Alto VM-Series is also RedHat/CentOS 6 based, only older (maybe v6.3); at least it's based on kernel 2.6.32. The PA-VM200 crashes randomly when the vNICs are Intel e1000 emulation types, but not with rtl8139.
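
For anyone wanting to reproduce, the vNIC model can be switched from the CLI; a sketch, assuming VMID 200 and bridge vmbr0 (both placeholders) - the model change takes effect on the next VM start:

root@n1:~# qm set 200 --net0 rtl8139,bridge=vmbr0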

I've tried to compare the perf. difference between these two vNIC types on 4.2-2, but I find no visible difference, and only rtl8139 is currently stable. Probably due to an issue in the older kernel + e1000 driver base, compared w/our home-built CentOS VMs, which all use e1000 vNICs w/o issues.
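
Rather than eyeballing the graphs, throughput through the FW per vNIC type can be measured; a quick sketch with iperf, assuming a test host behind the FW at 10.0.0.10 (a placeholder) running iperf -s:

root@client:~# iperf -c 10.0.0.10 -t 30 -P 4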
 
I maybe could... only it shows itself no better perf.-wise :confused:. It might be the older VM guest drivers vs. the newer KVM that's causing this 5-fold CPU hit. Also dunno if this NIC, also unsupported by PA (they only say e1000), is stable...
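
Purely as a hypothetical example of such a PA-unsupported model (not necessarily the one meant above), testing the paravirtual virtio NIC on the hot spare would look like this, again assuming VMID 200 and bridge vmbr0:

root@n1:~# qm set 200 --net0 virtio,bridge=vmbr0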
 
