Kernel Panic

Hardware: IBM BladeCenter HS22 -[7870H2G]-

With any kernel newer than pve-kernel-2.6.32-13-pve I get a kernel panic after any network activity (e.g. uploading an ISO image):

pve-kernel-2.6.32-11-pve - works fine
pve-kernel-2.6.32-12-pve - works fine
pve-kernel-2.6.32-13-pve - works fine
pve-kernel-2.6.32-14-pve - kernel panic
pve-kernel-2.6.32-16-pve - kernel panic
pve-kernel-2.6.32-17-pve - kernel panic

Code:
pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-13-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-13-pve: 2.6.32-72
pve-kernel-2.6.32-12-pve: 2.6.32-68
pve-kernel-2.6.32-14-pve: 2.6.32-74
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1

Code:
lspci
00:00.0 Host bridge: Intel Corporation 5520 I/O Hub to ESI Port (rev 22)
00:01.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 1 (rev 22)
00:03.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 3 (rev 22)
00:05.0 PCI bridge: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 5 (rev 22)
00:07.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 7 (rev 22)
00:08.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 8 (rev 22)
00:09.0 PCI bridge: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI Express Root Port 9 (rev 22)
00:10.0 PIC: Intel Corporation 7500/5520/5500/X58 Physical and Link Layer Registers Port 0 (rev 22)
00:10.1 PIC: Intel Corporation 7500/5520/5500/X58 Routing and Protocol Layer Registers Port 0 (rev 22)
00:11.0 PIC: Intel Corporation 7500/5520/5500 Physical and Link Layer Registers Port 1 (rev 22)
00:11.1 PIC: Intel Corporation 7500/5520/5500 Routing & Protocol Layer Register Port 1 (rev 22)
00:14.0 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub System Management Registers (rev 22)
00:14.1 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub GPIO and Scratch Pad Registers (rev 22)
00:14.2 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub Control Status and RAS Registers (rev 22)
00:14.3 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub Throttle Registers (rev 22)
00:15.0 PIC: Intel Corporation 7500/5520/5500/X58 Trusted Execution Technology Registers (rev 22)
00:16.0 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.1 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.2 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.3 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.4 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.5 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.6 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.7 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:1a.0 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #4
00:1a.7 USB controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #2
00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 1
00:1c.4 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 5
00:1d.0 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #1
00:1d.1 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #2
00:1d.2 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #3
00:1d.7 USB controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
00:1f.0 ISA bridge: Intel Corporation 82801JIB (ICH10) LPC Interface Controller
00:1f.3 SMBus: Intel Corporation 82801JI (ICH10 Family) SMBus Controller
06:00.0 PCI bridge: Vitesse Semiconductor VSC452 [SuperBMC] (rev 01)
07:00.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200EV
0b:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS (rev 10)
10:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709S Gigabit Ethernet (rev 20)
10:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709S Gigabit Ethernet (rev 20)
15:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57711 10-Gigabit PCIe
15:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57711 10-Gigabit PCIe
24:00.0 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03)
24:00.1 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03)
 

Attachments

  • kernel_panic_16.txt
  • kernel_panic_17.txt
If "lsmod | grep bnx" also shows bnx2x as the loaded module, then welcome to the club :/
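For anyone checking their own box, a quick way to see which Broadcom module and driver version is actually in use (eth0 below is just an example interface name):

Code:
lsmod | grep bnx      # shows whether bnx2 and/or bnx2x is currently loaded
ethtool -i eth0       # reports driver name, version and firmware for the NIC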

https://forum.proxmox.com/threads/11317-Kernel-Panic-on-Blade-BL460cG6?p=67919

What's great about your post is that you managed to grab the complete kernel panic text, something I have yet to accomplish (damn HP iLO consoles not letting you scroll back once the offending machine crashes).


EDITED TO ADD: I'm no kernel expert by any stretch of the imagination, but I noticed that both your logs say "last unloaded: scsi_wait_scan".
 
As requested in the DRBD thread linked above, I am bumping this topic.

To sum it up: the bnx2x drivers shipped with Proxmox cause random kernel panics whenever network traffic occurs. A fix suggested in the DRBD thread was to compile a much more recent driver version.

Someone on the staff said in another thread that there were issues with newer versions of the driver when they tested them - what issues were those? I would like to test specifically for them with the new drivers linked in the DRBD thread.
 

Just to add, I have been running the latest version for over a month now with no issues.
 
We have no such NIC here in our lab, so we cannot test and debug this here.

Also, no customer has such a system (with this problem), so we had no chance to dig deeper.
 
Hi Guys,

Not sure it's related (it's more of a performance problem), but I found a bug on the Red Hat Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=773675

Disable LRO for all NICs that have LRO enabled
"A serious performance problem occurs when running using a bond and a bridge on top of NICs that use LRO. LRO should get disabled automatically when the NIC is added to a bridge, but this doesn't work right when there is a bond in between. This patch disables LRO on all NICs."
"Previously, when a NIC was added to a bridge, LRO would not automatically be disabled if a bond was also present. The use of a bond and a bridge on top of NICs using LRO presented serious performance degradation. This update ensures that LRO is disabled on all NICs, avoiding this performance degradation issue."

Code:
echo "options bnx2x disable_tpa=1" > /etc/modprobe.d/bnx2x.conf
echo "options mlx4_en num_lro=0" > /etc/modprobe.d/mlx4_en.conf
echo "options enic lro_disable=1" > /etc/modprobe.d/enic.conf
echo "options s2io lro=0" > /etc/modprobe.d/s2io.conf
Then reboot.
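After the reboot, a quick check that the option actually took effect (eth0 is just an example interface; the sysfs path is only present while bnx2x is loaded):

Code:
cat /sys/module/bnx2x/parameters/disable_tpa   # should print 1
ethtool -k eth0 | grep -i large                # large-receive-offload should be "off"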

Does it help?
 

I can also confirm the Proxmox bug with bnx2x hardware.
I bought new server hardware, installed Proxmox, and it crashes twice a day.
 
Post details about your hardware.

And upgrade the driver and report if the issue is fixed with the new bnx2x driver.
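For reference, roughly how a newer driver gets built from Broadcom's netxtreme2 source package; the tarball name/version and the interface name below are placeholders, and the pve-headers package matching the running kernel must be installed first:

Code:
tar xzf netxtreme2-<version>.tar.gz
cd netxtreme2-<version>
make                              # builds bnx2x.ko against the running kernel's headers
make install                      # installs the module under /lib/modules/$(uname -r)
update-initramfs -u               # so the initramfs does not keep shipping the old module
rmmod bnx2x && modprobe bnx2x     # or simply reboot
ethtool -i eth0                   # confirm the new driver version is loaded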
 
Hello,


Same problem here.
New hardware, installed Proxmox 2.2 from the latest ISO.
Running "apt-get upgrade" crashes the server OS (reproducible).

After rebooting, the network does not work at all (with the driver loaded).
I have to ifdown/ifup eth<x> from the console to activate the network.

The hint from spirit (post #12) regarding LRO has no effect on my system.

Hardware: HP C7000 with BL460C Gen8 servers

network driver bnx2x, version 1.72.00-0 (quite old)
firmware-version: bc 7.0.49
bus-info: 0000:04:00.2


With kind regards

Jo_
 
Hi,

I use a Supermicro motherboard X9DRW-7/iTPF with 10GbE onboard.
lspci says:
05:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 11)
05:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 11)
Today I compiled and installed the current Broadcom Drivers.
I now have the following versions:
root@san05:~# ethtool -i eth0
driver: bnx2x
version: 1.72.00-0
firmware-version: bc 7.4.19
bus-info: 0000:05:00.0

Now the system survives high network traffic for about 5 minutes; with the older driver it died after about 2 minutes.

I also have warnings in the kernel log before the system crashes:
------------[ cut here ]------------
WARNING: at kernel/bc/net.c:315 ub_sock_tcp_chargerecv+0xd8/0x220() (Tainted: G W --------------- )
Hardware name: X9DRW-7/iTPF
Modules linked in: vzethdev vznetdev pio_nfs pio_direct pfmt_raw pfmt_ploop1 ploop simfs vzrst nf_nat vzcpt vzdquota vzmon vzdev ip6t_REJECT ip6table_mangle ip6table_filter ip6_tables xt_length xt_hl xt_tcpmss xt_TCPMSS iptable_mangle xt_limit vhost_net macvtap xt_dscp macvlan tun ipt_REJECT kvm_intel kvm dlm configfs xt_multiport xt_pkttype nf_conntrack_ftp nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack iptable_filter ip_tables vzevent ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi fuse scsi_transport_iscsi nfsd nfs lockd fscache nfs_acl auth_rpcgss sunrpc bonding ipv6 8021q garp snd_pcsp snd_pcm snd_timer tpm_tis tpm tpm_bios snd soundcore snd_page_alloc sb_edac i2c_i801 edac_core i2c_core ioatdma acpi_pad wmi shpchp ext3 jbd mbcache sg igb dca isci libsas scsi_transport_sas ahci bnx2x megaraid_sas mdio [last unloaded: scsi_wait_scan]
Pid: 6014, comm: dd veid: 0 Tainted: G W --------------- 2.6.32-17-pve #1
Call Trace:
<IRQ> [<ffffffff8106cf88>] ? warn_slowpath_common+0x88/0xc0
[<ffffffff8106cfda>] ? warn_slowpath_null+0x1a/0x20
[<ffffffff810afa58>] ? ub_sock_tcp_chargerecv+0xd8/0x220
[<ffffffff810afc50>] ? ub_sockrcvbuf_charge+0xb0/0xc0
[<ffffffff814abefc>] ? tcp_try_rmem_schedule+0x5c/0x380
[<ffffffff814ac3c6>] ? tcp_data_queue+0x1a6/0xc80
[<ffffffffa0589205>] ? ipt_do_table+0x2c5/0x6b0 [ip_tables]
[<ffffffff814ab9cb>] ? tcp_validate_incoming+0x30b/0x3a0
[<ffffffff814af6f9>] ? tcp_rcv_established+0x3b9/0x890
[<ffffffff814b7edb>] ? tcp_v4_do_rcv+0x31b/0x470
[<ffffffffa05c45ca>] ? ipv4_confirm+0x8a/0x1e0 [nf_conntrack_ipv4]
[<ffffffff814ba60e>] ? tcp_v4_rcv+0x50e/0x8f0
[<ffffffff81495f30>] ? ip_local_deliver_finish+0x0/0x310
[<ffffffff8149604c>] ? ip_local_deliver_finish+0x11c/0x310
[<ffffffff814962d0>] ? ip_local_deliver+0x90/0xa0
[<ffffffff8149573d>] ? ip_rcv_finish+0x12d/0x440
[<ffffffff81495cd4>] ? ip_rcv+0x284/0x370
[<ffffffff8145c8db>] ? __netif_receive_skb+0x33b/0x750
[<ffffffff814b69ba>] ? tcp4_gro_receive+0x5a/0xe0
[<ffffffff8145ef18>] ? netif_receive_skb+0x58/0x60
[<ffffffff8145f030>] ? napi_skb_finish+0x50/0x70
[<ffffffff81461729>] ? napi_gro_receive+0x39/0x50
[<ffffffffa00579c6>] ? bnx2x_rx_int+0x9b6/0x17d0 [bnx2x]
[<ffffffff8145183e>] ? __kfree_skb+0x1e/0xa0
[<ffffffff814518fb>] ? consume_skb+0x3b/0x80
[<ffffffff8145d345>] ? dev_kfree_skb_any+0x45/0x50
[<ffffffffa00532bd>] ? bnx2x_free_tx_pkt+0x19d/0x2a0 [bnx2x]
[<ffffffffa00532bd>] ? bnx2x_free_tx_pkt+0x19d/0x2a0 [bnx2x]
[<ffffffffa005006a>] ? bnx2x_8726_config_init+0x12a/0x3c0 [bnx2x]
[<ffffffffa005889c>] ? bnx2x_poll+0xbc/0x300 [bnx2x]
[<ffffffff81461843>] ? net_rx_action+0x103/0x2e0
[<ffffffff81075dc3>] ? __do_softirq+0x103/0x260
[<ffffffff81080095>] ? get_next_timer_interrupt+0x1b5/0x260
[<ffffffff8100c30c>] ? call_softirq+0x1c/0x30
[<ffffffff8100df35>] ? do_softirq+0x65/0xa0
[<ffffffff81075bed>] ? irq_exit+0xcd/0xd0
[<ffffffff81530595>] ? do_IRQ+0x75/0xf0
[<ffffffff8100bb13>] ? ret_from_intr+0x0/0x11
<EOI> [<ffffffff81138033>] ? __alloc_pages_nodemask+0x1e3/0xb80
[<ffffffff811251e1>] ? file_read_iter_actor+0x61/0x80
[<ffffffff8100bb0e>] ? common_interrupt+0xe/0x13
[<ffffffff81177643>] ? policy_nodemask+0x13/0x50
[<ffffffff8117a44a>] ? alloc_pages_current+0xaa/0x120
[<ffffffff811a0edc>] ? pipe_write+0x35c/0x640
[<ffffffff8119648a>] ? do_sync_write+0xfa/0x140
[<ffffffff81096b50>] ? autoremove_wake_function+0x0/0x40
[<ffffffff8100bb0e>] ? common_interrupt+0xe/0x13
[<ffffffff81196768>] ? vfs_write+0xb8/0x1a0
[<ffffffff81198155>] ? fget_light+0x45/0xa0
[<ffffffff81197181>] ? sys_write+0x51/0x90
[<ffffffff8100b182>] ? system_call_fastpath+0x16/0x1b
---[ end trace 8766522261194a0b ]---

So the problem does not seem to be solved.
Do you have the same driver versions that I have?
 

Your ethtool output still shows version 1.72.00, so something went wrong; 1.72.18 is what you want. Here is mine:

root@fiosprox1:~# ethtool -i eth0
driver: bnx2x
version: 1.72.18
firmware-version: bc 6.2.15 phy 4.f
bus-info: 0000:20:00.0
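If ethtool keeps reporting the old version after a rebuild, it is worth checking which module file modprobe will actually load and whether a stale copy is still baked into the initramfs (eth0 is just an example interface):

Code:
modinfo bnx2x | grep -E '^(filename|version)'   # path and version of the module that will be loaded
update-initramfs -u -k $(uname -r)              # rebuild the initramfs so it picks up the new module
ethtool -i eth0                                 # after a reload/reboot, shows the version actually in use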
 
I installed the driver from http://forum.proxmox.com/threads/12064-DRBD-Assistance?p=66039#post66039

then did "aptitude dist-upgrade" and it did not crash (which it did before, reliably)!

There was one caveat I had to overcome that others may experience too:

As I said before, the kernel would reliably panic whenever I tried to install anything with apt. Oddly enough, wget'ing the VERY SAME PACKAGE works just fine, so I had to get build-essential and the headers without downloading through apt. Here's how I did that:

Code:
apt-get -y --print-uris --no-download install build-essential|egrep -o -e "http.*\.deb'"|sed -r "s/'//g"|xargs wget
wget ftp://download.proxmox.com/debian/dists/squeeze/pve/binary-amd64/pve-headers-2.6.32-17-pve_2.6.32-83_amd64.deb
dpkg -i ./*.deb

Since the aforementioned dist-upgrade finished without issues, I will consider this driver working fine for now. I will report back in a week, or sooner if anything bad happens.
 

You were right, something must have gone wrong.

I have now installed the current January driver from http://www.broadcom.com/support/ethernet_nic/netxtremeii10.php

It identifies as:
driver: bnx2x
version: 1.74.22
firmware-version: bc 7.4.19
bus-info: 0000:05:00.0

And the system seems to be stable now.
 
Thanks for your reply.

These options do not solve the kernel panic problem.
They may be useful later, though.
 
I just upgraded to pvetest.

It appears to be stable and the kernel panic no longer occurs, but under network load in the guest OS the host system's syslog gets clogged with messages:

Feb 7 14:36:59 dc-bs1-05 kernel: ------------[ cut here ]------------
Feb 7 14:36:59 dc-bs1-05 kernel: WARNING: at net/core/dev.c:1700 skb_gso_segment+0x1e2/0x2c0() (Tainted: G W --------------- )
Feb 7 14:36:59 dc-bs1-05 kernel: Hardware name: BladeCenter HS22 -[7870H2G]-
Feb 7 14:36:59 dc-bs1-05 kernel: tun: caps=(0x80000000, 0x0) len=5844 data_len=4344 ip_summed=1
Feb 7 14:36:59 dc-bs1-05 kernel: Modules linked in: vzethdev vznetdev pio_nfs pio_direct pfmt_raw pfmt_ploop1 ploop simfs vzrst nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 vzcpt nfs lockd fscache nfs_acl auth_rpcgss sunrpc nf_conntrack vzdquota vzmon vzdev ip6t_REJECT ip6table_mangle ip6table_filter ip6_tables xt_length xt_hl xt_tcpmss xt_TCPMSS iptable_mangle iptable_filter xt_multiport xt_limit xt_dscp vhost_net macvtap macvlan tun ipt_REJECT kvm_intel ip_tables kvm dlm configfs vzevent fuse bonding ipv6 8021q garp snd_pcsp cdc_ether i2c_i801 snd_pcm snd_timer snd usbnet mii soundcore i2c_core i7core_edac ioatdma serio_raw dca tpm_tis edac_core snd_page_alloc shpchp tpm tpm_bios ext3 jbd mbcache dm_round_robin dm_multipath mptsas lpfc mptscsih scsi_transport_fc mptbase scsi_tgt bnx2x mdio bnx2 scsi_transport_sas [last unloaded: scsi_wait_scan]
Feb 7 14:36:59 dc-bs1-05 kernel: Pid: 2218, comm: rsyslogd veid: 0 Tainted: G W --------------- 2.6.32-18-pve #1
Feb 7 14:36:59 dc-bs1-05 kernel: Call Trace:
Feb 7 14:36:59 dc-bs1-05 kernel: <IRQ> [<ffffffff8106d228>] ? warn_slowpath_common+0x88/0xc0
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff8106d316>] ? warn_slowpath_fmt+0x46/0x50
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff814ee379>] ? br_flood+0xb9/0xe0
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff81453912>] ? skb_gso_segment+0x1e2/0x2c0
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff81447e92>] ? kfree_skb+0x42/0x90
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff8145823e>] ? dev_hard_start_xmit+0x19e/0x510
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff814532ee>] ? __netif_receive_skb+0x45e/0x750
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff8147314a>] ? sch_direct_xmit+0x15a/0x1d0
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff81458ac8>] ? dev_queue_xmit+0x518/0x730
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff814ee440>] ? br_dev_queue_push_xmit+0x60/0xc0
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff814ee4f8>] ? br_forward_finish+0x58/0x60
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff814ee59a>] ? __br_forward+0x9a/0xc0
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff814ee625>] ? br_forward+0x65/0x70
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff814ef5e1>] ? br_handle_frame_finish+0x221/0x300
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff814ef882>] ? br_handle_frame+0x1c2/0x270
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff814532ee>] ? __netif_receive_skb+0x45e/0x750
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff8145367a>] ? process_backlog+0x9a/0x100
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff81457aa3>] ? net_rx_action+0x103/0x2e0
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff81076063>] ? __do_softirq+0x103/0x260
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff8100c2ac>] ? call_softirq+0x1c/0x30
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff8100def5>] ? do_softirq+0x65/0xa0
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff81075e8d>] ? irq_exit+0xcd/0xd0
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff81524935>] ? do_IRQ+0x75/0xf0
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff8100ba93>] ? ret_from_intr+0x0/0x11
Feb 7 14:36:59 dc-bs1-05 kernel: <EOI> [<ffffffff8106f0d0>] ? do_syslog+0x200/0x680
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff8106f0f3>] ? do_syslog+0x223/0x680
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff81096bc0>] ? autoremove_wake_function+0x0/0x40
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff8120d912>] ? kmsg_read+0x32/0x60
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff81201b7e>] ? proc_reg_read+0x7e/0xc0
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff811986a5>] ? vfs_read+0xb5/0x1a0
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff811987e1>] ? sys_read+0x51/0x90
Feb 7 14:36:59 dc-bs1-05 kernel: [<ffffffff8100b102>] ? system_call_fastpath+0x16/0x1b
Feb 7 14:36:59 dc-bs1-05 kernel: ---[ end trace c7e0d04e5163ea6e ]---

Within 10 minutes the syslog grows to 400 MB!!!

# pveversion -v
pve-manager: 2.3-7 (pve-manager/2.3/1fe64d18)
running kernel: 2.6.32-18-pve
proxmox-ve-2.6.32: 2.3-88
pve-kernel-2.6.32-18-pve: 2.6.32-88
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-8
pve-firmware: 1.0-21
libpve-common-perl: 1.0-44
libpve-access-control: 1.0-25
libpve-storage-perl: 2.3-2
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: not correctly installed
vzquota: 3.1-1
pve-qemu-kvm: 1.3-18
ksm-control-daemon: 1.1-1

# ethtool -i eth0
driver: bnx2x
version: 1.74.22
firmware-version: bc 6.2.22
bus-info: 0000:15:00.0

Network load generated directly on the host system does not cause problems!
 
