I gave up trying to find out what the reason was...
Moved back to a rebuilt 2.6 kernel with the newest, buggy NIC drivers removed (we had kernel panics with a bridge + 4-NIC bond).
1) The 5-node cluster dies with the new kernel.
I tested the packages built from git sources for a while before they appeared in pvetest... (also with the kernel from pvetest, and with builds from source using different NIC driver versions; the result was the same every time.)
After updating the kernels on the 5-node cluster to 3.10.0-3, we got a cluster split...
And does pve-manager 3.2-10 + novnc-pve from git work with the qemu/kvm packages from the repo?
I updated all the packages on my testing Proxmox server to the latest versions from git and fell in love with noVNC :-) I want it on all my servers, but I'm afraid of bugs in QEMU 2 and the other new stuff...
Avoid using more than 2 NICs in a bond in production; it is not stable.
We get kernel panics on different servers with different NICs: http://forum.proxmox.com/threads/18382-Kernel-panics-on-all-kernels-with-4-bonded-nics
Re: Kernel panic on 3.2 with HP NC360T
We discovered the problem.
The kernel panics recurred on all of our servers with different NICs (Broadcom and Intel) wherever the network was built on a bond with more than one NIC (4 NICs in our case).
As soon as we lowered the number of bonded NICs to 1, the kernel...
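For reference, a minimal sketch of the kind of setup that triggered the panics: a Debian-style /etc/network/interfaces with four slaves bonded under a bridge. The interface names, addresses, and bond mode here are illustrative assumptions, not taken from the affected servers; the workaround described above amounts to reducing the slave list to a single NIC.

```
# /etc/network/interfaces (sketch; names, mode, and addresses are assumptions)
auto bond0
iface bond0 inet manual
        slaves eth0 eth1 eth2 eth3    # more than 2 slaves: the unstable case
        bond_miimon 100
        bond_mode balance-rr          # mode is an assumption

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
```

With the workaround, the slaves line would list only one NIC (e.g. "slaves eth0"), leaving the bridge configuration unchanged.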
After updating to the latest version, we get a kernel panic under heavy load on the HP NC360T (Intel 82571EB chipset) NIC.
root@proxmox:~# pveversion -v
proxmox-ve-2.6.32: 3.2-124 (running kernel: 2.6.32-28-pve)
pve-manager: 3.2-2 (running version: 3.2-2/82599a65)
pve-kernel-2.6.32-27-pve: 2.6.32-121...