PVE Kernel crash on HP Bladeserver after upgrade to PVE 2.2

No, the G6 server now shows the same output in the syslog. But it's working. I'll try with clean installation.
 
On the G6, are you running the most current firmware? HP had PSODs on a number of those boxes with various flavors of ESX; on some of my HP servers that's the most common issue.
 
I installed a clean Proxmox VE 2.2 and upgraded it to 2.3 through the pvetest repository. The situation is the same: during backup of a Windows KVM guest there is a lot of output in the syslog that looks like kernel panics, but the backup keeps running and the progress is shown correctly. It finished correctly as well.
Here is the output of dmesg in case it is needed.
 

Attachments

  • server.txt.zip
    12.9 KB
Just a quick Google search gives this: "Generic Receive Offload (GRO) on Physical Network Interfaces (PIF) can flood the log file with the following messages: WARNING: at net/core/dev.c:1594 skb_gso_segment+0x1a1/0x250(). This is due to a bug that incorrectly clears the feature flags in the netback driver."

Disable GRO and test again (with ethtool).
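
For reference, GRO can be checked and switched off per interface with ethtool; eth0 below is just a placeholder for the actual NIC (on a bridged setup, the physical port behind vmbr0), and the change takes effect immediately but does not survive a reboot:

# show the current offload settings for the interface (eth0 is a placeholder)
ethtool -k eth0 | grep generic-receive-offload
# turn GRO off on that interface (not persistent across reboots)
ethtool -K eth0 gro off

To keep it across reboots, one common option is a post-up line for that interface in /etc/network/interfaces, e.g. post-up ethtool -K eth0 gro off.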
 
Today I upgraded to the latest stable packages and the server has been running for two hours with no problems so far. I'll let it run for two more days, and if there are still no problems I'll add it to the cluster again.
 
These are the version details:

root@pmx2:~# pveversion -v
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-96
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-20
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-7
vncterm: 1.0-4
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-10
ksm-control-daemon: 1.1-1
 
Still running without any problems. I added it to the cluster again and migrated a test VM onto it. All went smoothly. I consider the problem solved (whatever it was).
 
