Kernel Panic Debian Lenny as VM

oerb

New Member
Nov 28, 2009
Hi,

I have two Proxmox VE hosts that have been working fine for some months. Now we want to migrate Open-Xchange Server 5 to 6 onto a new Debian Lenny VM. The installation itself causes no problems, but when we start syncing the IMAP folders, Debian Lenny gets a kernel panic from the I/O overload. We tested this on Debian 64-bit and 32-bit; it is the same either way. I had help with the migration from a professional team with broad experience in the procedure, but they had never seen this problem before.

Does anyone know what could cause this?

:confused:
 
Host:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 2 102 249 4639 0 0 0 12 7 14616 27 3 70 0
1 0 2 102 249 4639 0 0 0 0 7 15267 27 3 71 0
1 0 2 102 249 4639 0 0 0 0 5 14400 27 3 71 0
3 0 2 102 249 4639 0 0 0 0 7 15061 26 3 71 0
1 0 2 102 249 4639 0 0 0 16 9 15042 27 3 71 0
2 0 2 102 249 4639 0 0 0 40 18 15342 26 3 71 0
2 0 2 102 249 4639 0 0 4 16 12 16306 28 2 70 0
1 0 2 102 249 4639 0 0 0 4 7 14879 27 2 71 0
1 0 2 102 249 4639 0 0 0 28 10 15106 27 3 70 0
2 0 2 102 249 4639 0 0 0 32 13 13790 27 2 71 0
1 0 2 102 249 4639 0 0 0 8 10 13520 26 2 71 0
1 0 2 102 249 4639 0 0 4 92 28 14990 27 3 70 0
1 0 2 102 249 4639 0 0 0 16 8 13601 26 3 71 0
1 0 2 102 249 4639 0 0 0 0 5 14193 27 3 71 0
2 0 2 102 249 4639 0 0 0 116 22 13963 27 3 70 0
2 0 2 102 249 4639 0 0 0 0 10 14452 27 2 71 0
2 0 2 102 249 4639 0 0 0 24 10 14121 27 2 71 0
1 0 2 102 249 4639 0 0 0 0 5 15046 27 2 71 0
1 0 2 102 249 4639 0 0 0 0 5 15460 27 2 72 0
1 0 2 102 249 4639 0 0 0 16 8 15117 26 3 71 0
1 0 2 102 249 4639 0 0 0 12 12 15873 26 2 71 0

the VM:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 0 3398 35 109 0 0 620 88 31 211 6 6 87 1
0 0 0 3398 35 109 0 0 0 0 7 115 0 0 100 0
0 0 0 3398 35 109 0 0 0 360 34 123 0 0 100 0
0 0 0 3398 35 109 0 0 0 0 3 114 0 0 100 0
0 0 0 3398 35 109 0 0 0 0 3 121 0 0 100 0
0 0 0 3398 35 109 0 0 0 0 4 128 0 0 100 0
0 0 0 3398 35 109 0 0 0 44 7 134 0 0 100 0
0 0 0 3398 35 109 0 0 0 80 15 115 0 0 100 0
0 0 0 3398 35 109 0 0 0 0 6 162 0 0 100 0
0 0 0 3398 35 109 0 0 0 0 6 105 0 0 100 0
0 0 0 3398 35 109 0 0 0 0 6 115 0 0 100 0
0 0 0 3398 35 109 0 0 0 16 7 111 0 0 100 0
0 0 0 3398 35 109 0 0 0 0 3 119 0 0 100 0
0 0 0 3398 35 109 0 0 0 36 9 127 0 0 100 0
0 0 0 3398 35 109 0 0 0 0 5 202 2 0 98 0
1 0 0 3398 35 109 0 0 0 0 4 117 2 0 98 0
0 0 0 3398 35 109 0 0 0 0 3 155 2 0 98 0
0 0 0 3398 35 109 0 0 0 0 4 134 0 0 100 0
0 0 0 3398 35 109 0 0 0 0 3 116 0 0 100 0
0 0 0 3398 35 109 0 0 0 0 4 102 0 0 100 0
0 0 0 3398 35 109 0 0 0 16 5 123 0 0 100 0
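One way to quantify the io-wait the tables above show is to average the wa (last) column of a saved vmstat log with awk. A sketch; the two inline sample lines just reuse rows from the host output above, and in practice you would capture a fresh log instead:

```shell
# Average the io-wait ("wa") column of a saved vmstat log.  The two sample
# lines below are copied from the host output above; in practice capture a
# fresh log with something like:  vmstat 1 20 > vmstat.log
cat > vmstat.log <<'EOF'
1 0 2 102 249 4639 0 0 0 12 7 14616 27 3 70 0
1 0 2 102 249 4639 0 0 0 0 7 15267 27 3 71 0
EOF
# "wa" is the 16th field in this layout (check the header line of your
# vmstat version -- column order can differ).
avg_wa=$(awk '{ sum += $16; n++ } END { printf "%.1f", sum / n }' vmstat.log)
echo "average wa: ${avg_wa}%"
```

A wa value that stays near 0 on the host (as here) while the guest panics under I/O load suggests the bottleneck is inside the virtualization layer rather than the physical storage.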

Powertop looks like this:
Wakeups-from-idle per second : 22.7 interval: 10.0s
no ACPI power usage estimate available

Top causes for wakeups:
65.0% ( 44.4) java : futex_wait (hrtimer_wakeup)
5.9% ( 4.0) <kernel module> : usb_hcd_poll_rh_status (rh_timer_func)
4.4% ( 3.0) <kernel IPI> : Rescheduling interrupts
4.1% ( 2.8) <interrupt> : virtio0, eth0
2.9% ( 2.0) <kernel core> : clocksource_register (clocksource_watchdog)
2.9% ( 2.0) <kernel module> : sym_timer (sym53c8xx_timer)
2.9% ( 2.0) <kernel core> : schedule_delayed_work_on (delayed_work_timer_fn)
2.2% ( 1.5) mysqld : schedule_timeout (process_timeout)
1.5% ( 1.0) dovecot : schedule_timeout (process_timeout)
1.5% ( 1.0) xfsaild : schedule_timeout (process_timeout)
1.5% ( 1.0) apache2 : schedule_timeout (process_timeout)
1.2% ( 0.8) xfsbufd : schedule_timeout (process_timeout)
1.0% ( 0.7) <interrupt> : uhci_hcd:usb1, sym53c8xx

Suggestion: Enable USB autosuspend by pressing the U key or adding
usbcore.autosuspend=1 to the kernel command line in the grub config
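For reference, powertop's suggestion would look roughly like this on Lenny, which still ships GRUB legacy (a sketch; the kernel version and root device below are placeholders, not taken from this system):

```
# /boot/grub/menu.lst -- append the parameter to the kernel line, then reboot:
kernel /boot/vmlinuz-2.6.26-2-amd64 root=/dev/sda1 ro quiet usbcore.autosuspend=1
```

On later systems with GRUB 2 the parameter would instead go into GRUB_CMDLINE_LINUX in /etc/default/grub, followed by running update-grub.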
 
Hmm, Xrisse, I get a freeze in the console and have to restart the machine via Proxmox VE. The CPU load only hits 100% on the number of CPUs assigned to the VM, so it is probably not a threading issue. I tested this with both one and two CPUs defined for the VM.
 

I think you have the problem all of us have: don't use virtio.
 
So, after a lot of trouble and moving the machines around, I can tell a little bit more. I sometimes lose the clients' connections to the VM, which is a Win2008 server with an IDE drive. @laradji I never used virtio for the hard disk, so that cannot be the cause; the virtio output was only for the network card. The connection loss coincides with an SSH connection from the master node to the client node of the Proxmox VE cluster where the machine is running. Exactly at that time you can see 100% CPU on the machines. Also interesting: when I run Linux-based KVM machines on the same node alongside a Win2008- or Win2009-based machine, they hit 100% CPU on their assigned CPUs, mostly triggered by slightly heavier use from some users (a special DB action that, on a non-clustered Proxmox VE without other machines, takes only about 30% CPU). And it has nothing to do with actual usage; I can see this on weekends when no one is logged in, too. I captured this information with nrgraph and Nagios.

All in all, I think this might be the same problem I have with the Debian VM. In the end I put this onto a heavy cluster node that has more than five times the hardware power it needs for these two machines. It is horrible to see that such simple use kills everything on the node.
 
Hmm, it might be interesting too that all machines were migrated from VMware Server 2 and are qcow images.

I'm really in trouble with this. Can someone help?
 
O O :(

Reading can help sometimes. I found the information about the e1000 network card in the wiki after searching around. I will test that first before annoying any of you... sorry ;-) (but I do not think that helps with the freezing problem, only with the connection problems).
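For reference, switching the guest's NIC from virtio to e1000 is a one-line change in the VM's KVM config on the Proxmox host (a sketch; the VM ID 101, the MAC address, and the bridge name are placeholders, not taken from this system):

```
# /etc/pve/qemu-server/101.conf -- before:
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
# after (stop and start the VM so the new model is picked up):
net0: e1000=DE:AD:BE:EF:00:01,bridge=vmbr0
```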
 
Hi,
perhaps you should try to convert the VM's disks to raw. I once had a similarly strange issue with one VM, which went away after switching to raw.
Did you also look at IO-wait while this happens? How fast is your storage?

Udo
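Udo's suggestion of converting to raw could be done with qemu-img (a sketch; the image name below is a placeholder -- shut the VM down first and keep the qcow2 file until the raw copy is verified):

```shell
# Convert the qcow2 image to a raw image (the VM must be stopped).
qemu-img convert -f qcow2 -O raw vm-101-disk.qcow2 vm-101-disk.raw
# Inspect the result before pointing the VM config at the new file.
qemu-img info vm-101-disk.raw
```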
 
