Kernel panic with pve-kernel-2.6.32-48-pve when mounting an NFS share

pwxst

Renowned Member
Feb 29, 2012
Austria
Hi,

since the last update to pve-kernel-2.6.32-48-pve I get a kernel panic when I start multiple OpenVZ containers that use the same share from the same NFS server.

If I start only one container, everything works as it should, but as soon as I try to start an additional container that mounts the same NFS share, the whole system hangs and I get a kernel panic.
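For reference, the setup looks roughly like this (the CT IDs, server address and export path below are placeholders, not my exact values): NFS is enabled as a feature for each container and the same export is then mounted inside every container.

# enable NFS client support for the containers (hypothetical CT IDs)
vzctl set 101 --features "nfs:on" --save
vzctl set 102 --features "nfs:on" --save

# inside each container the same export is mounted (server and path are placeholders)
mount -t nfs 192.168.1.50:/export/data /mnt/data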
Attached you will find my system details and the kernel panic output.

Can anyone suggest anything to help?

With kernel 2.6.32-47-pve everything works as it should. What has changed between the two versions?

Best regards, Peter
 

Attachments

  • kp-martes-20161107-134032.png (168.4 KB)
  • pvesysinfo-03-2016-11-07-133455.txt (56 KB)
Hi,
I also get a kernel panic (my first one on Proxmox) early in every boot with this pve-kernel-2.6.32-48-pve, on a single small Supermicro server without NFS.
Until now I haven't had the time to look deeper. My previous kernel, pve-kernel-2.6.32-46-pve, also works fine.

Best regards,
maxprox
 
First, a look at the hardware:

Supermicro X8SIL, BIOS 1.1 05/27/2010
Intel® 3400 chipset, LGA 1156 socket, CPU: Intel Xeon X3450, 2.67 GHz
Network: 2x Intel® 82574L Gigabit Ethernet controllers in bond_mode 802.3ad (LACP, Intel PRO/1000 driver e1000e)
Storage: 1x Adaptec 3405 RAID controller with 4x Hitachi 1 TB in RAID 10
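For completeness, the bond is set up roughly like this in /etc/network/interfaces (interface names and addresses are only illustrative, not my real ones):

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0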

For now I stay with kernel 2.6.32-46-pve; a sketch for booting it by default follows the package list below.
pveversion -v
proxmox-ve-2.6.32: 3.4-184 (running kernel: 2.6.32-46-pve)
pve-manager: 3.4-15 (running version: 3.4-15/e1daa307)
pve-kernel-2.6.32-44-pve: 2.6.32-173
pve-kernel-2.6.32-48-pve: 2.6.32-184
pve-kernel-2.6.32-45-pve: 2.6.32-174
pve-kernel-2.6.32-46-pve: 2.6.32-177
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-20
qemu-server: 3.4-9
pve-firmware: 1.1-6
libpve-common-perl: 3.0-27
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-35
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-28
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
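As a sketch, keeping the server on 2.6.32-46-pve can be done by pointing GRUB at the older entry (the exact menu entry title below is an assumption, check /boot/grub/grub.cfg for the real one) or by removing the -48 kernel package:

# option 1: set the default boot entry in /etc/default/grub, then regenerate the config
GRUB_DEFAULT="Debian GNU/Linux, with Linux 2.6.32-46-pve"
update-grub

# option 2: simply remove the problematic kernel package
apt-get remove pve-kernel-2.6.32-48-pve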

For me the solution will be to replace the whole server with a new one running Proxmox 4.x;
in that case the real problem is not the kernel panic but the age of the server (> 5 years).
 
What do you mean by a recent kernel version? The one I posted above (2.6.32-48-pve) is the latest I got, and I have the problems with that one, not with the previous kernels.
 
What do you mean by a recent kernel version? The one I posted above (2.6.32-48-pve) is the latest I got, and I have the problems with that one, not with the previous kernels.

I only had the problems with kernel version proxmox-ve-2.6.32: 3.4-182 (running kernel: 2.6.32-48-pve);
since I updated to proxmox-ve-2.6.32: 3.4-184 (running kernel: 2.6.32-48-pve), the problem is gone.
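For anyone hitting the same thing, a rough way to get to that state and verify it (standard Debian/Proxmox commands, nothing specific to this bug):

apt-get update && apt-get dist-upgrade   # should pull the newer proxmox-ve-2.6.32 (3.4-184 at the time of writing)
pveversion -v | grep proxmox-ve          # should report 3.4-184 after the upgrade
uname -r                                 # after a reboot, confirms the running kernel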
 
