Kernel error during file transfer

bibax

New Member
May 3, 2013
Hi,

I get a kernel error when I upload a file to a guest VM.
I don't know if it's a serious error, but /var/log/messages becomes very large.
It looks like a kernel crash... What do you think about it?
Below is one of the messages, but I have hundreds and hundreds of them in /var/log/messages.

Jun 17 16:38:50 proxmox1 kernel: ------------[ cut here ]------------
Jun 17 16:38:50 proxmox1 kernel: WARNING: at net/core/dev.c:1711 skb_gso_segment+0x1e2/0x2c0() (Tainted: G W --------------- )
Jun 17 16:38:50 proxmox1 kernel: Hardware name: PowerEdge R720
Jun 17 16:38:50 proxmox1 kernel: tun: caps=(0x80000000, 0x0) len=8852 data_len=7352 ip_summed=1
Jun 17 16:38:50 proxmox1 kernel: Modules linked in: ipmi_si mpt2sas scsi_transport_sas raid_class mptctl mptbase vzethdev vznetdev pio_nfs pio_direct pfmt_raw pfmt_ploop1 ploop simfs vzrst nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 vzcpt nf_conntrack vzdquota vzmon vzdev ip6t_REJECT ip6table_mangle ip6table_filter ip6_tables xt_length xt_hl xt_tcpmss xt_TCPMSS iptable_mangle vhost_net iptable_filter xt_multiport xt_limit macvtap macvlan xt_dscp tun ipt_REJECT kvm_intel ip_tables kvm ipmi_devintf ipmi_msghandler dell_rbu dlm configfs vzevent ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi fuse nfsd nfs lockd fscache nfs_acl auth_rpcgss sunrpc bonding ipv6 8021q garp shpchp tpm_tis snd_pcsp snd_pcm snd_timer tpm sb_edac power_meter edac_core dcdbas snd soundcore snd_page_alloc tpm_bios wmi ext3 jbd mbcache sg ses enclosure lpfc scsi_transport_fc ahci bnx2x megaraid_sas mdio scsi_tgt tg3 [last unloaded: ipmi_si]
Jun 17 16:38:50 proxmox1 kernel: Pid: 11544, comm: apache2 veid: 0 Tainted: G W --------------- 2.6.32-19-pve #1
Jun 17 16:38:50 proxmox1 kernel: Call Trace:
Jun 17 16:38:50 proxmox1 kernel: <IRQ> [<ffffffff8106d718>] ? warn_slowpath_common+0x88/0xc0
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff8106d806>] ? warn_slowpath_fmt+0x46/0x50
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff81454e82>] ? skb_gso_segment+0x1e2/0x2c0
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff8105a532>] ? default_wake_function+0x12/0x20
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff814597de>] ? dev_hard_start_xmit+0x19e/0x510
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff8148cf5c>] ? ip_local_deliver_finish+0x11c/0x310
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff8147472a>] ? sch_direct_xmit+0x15a/0x1d0
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff8145a068>] ? dev_queue_xmit+0x518/0x730
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff814efa80>] ? br_dev_queue_push_xmit+0x60/0xc0
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff814efb38>] ? br_forward_finish+0x58/0x60
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff814efbda>] ? __br_forward+0x9a/0xc0
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff814efc65>] ? br_forward+0x65/0x70
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff814f0c21>] ? br_handle_frame_finish+0x221/0x300
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff814f0ec2>] ? br_handle_frame+0x1c2/0x270
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff8145485e>] ? __netif_receive_skb+0x45e/0x750
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff81456bf8>] ? netif_receive_skb+0x58/0x60
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff81456d10>] ? napi_skb_finish+0x50/0x70
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff81458f19>] ? napi_gro_receive+0x39/0x50
Jun 17 16:38:50 proxmox1 kernel: [<ffffffffa00974d7>] ? bnx2x_rx_int+0xbd7/0x16a0 [bnx2x]
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff8144935b>] ? consume_skb+0x3b/0x80
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff814551a5>] ? dev_kfree_skb_any+0x45/0x50
Jun 17 16:38:50 proxmox1 kernel: [<ffffffffa009389e>] ? bnx2x_msix_fp_int+0xde/0x180 [bnx2x]
Jun 17 16:38:50 proxmox1 kernel: [<ffffffffa009804b>] ? bnx2x_poll+0xab/0x300 [bnx2x]
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff81459033>] ? net_rx_action+0x103/0x2f0
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff810765c3>] ? __do_softirq+0x103/0x260
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff8100c2ac>] ? call_softirq+0x1c/0x30
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff8100def5>] ? do_softirq+0x65/0xa0
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff810763ed>] ? irq_exit+0xcd/0xd0
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff81526635>] ? do_IRQ+0x75/0xf0
Jun 17 16:38:50 proxmox1 kernel: [<ffffffff8100ba93>] ? ret_from_intr+0x0/0x11
Jun 17 16:38:50 proxmox1 kernel: <EOI>
Jun 17 16:38:50 proxmox1 kernel: ---[ end trace 3e01ac09af8c30d7 ]---
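
For what it's worth, the warning points at skb_gso_segment() in the bridge/tun transmit path, so I was planning to look at the offload settings on the host side first. This is only what I intend to check, and eth0/vmbr0 are assumptions for the physical NIC and the bridge on my server:

# ethtool -k eth0
# ethtool -k vmbr0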

Thank you very much!
 
Hi,

This is my Proxmox configuration; I think all updates are applied:
# pveversion -v
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-96
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-20
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-7
vncterm: 1.0-4
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-10
ksm-control-daemon: 1.1-1

And this is my guest:
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.3 (Santiago)

I hope that helps.

Thank you
 
Hello dear Proxmox users,

We also have the same problem on a Dell R720, with only one network cable plugged into the first port (10G) of the Broadcom network card. I haven't seen exactly what impact these warnings have on the network, but the system log is full of similar call stacks, so we are interested in any solution or workaround for this problem (the only thing we plan to try ourselves is noted after the syslog excerpt below).

The driver version and firmware seem to be up to date (July 2013). This does not seem to be the "warn_slowpath" bug that makes the bnx2x driver completely unusable with some other kernels.

# ethtool -i eth0
driver: bnx2x
version: 1.74.22
firmware-version: FFV7.6.14 bc 7.6.56 phy 1.34
bus-info: 0000:01:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no

# pveversion -v
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1


Syslog:
Jul 19 16:03:39 hypervisor kernel: ------------[ cut here ]------------
Jul 19 16:03:39 hypervisor kernel: WARNING: at net/core/dev.c:1711 skb_gso_segment+0x220/0x310() (Tainted: G W --------------- )
Jul 19 16:03:39 hypervisor kernel: Hardware name: PowerEdge R720
Jul 19 16:03:39 hypervisor kernel: tun: caps=(0x80000000, 0x0) len=2960 data_len=1460 ip_summed=1
Jul 19 16:03:39 hypervisor kernel: Modules linked in: ipmi_si mpt2sas raid_class scsi_transport_sas mptctl mptbase vzethdev vznetdev pio_nfs pio_direct pfmt_raw pfmt_ploop1 ploop simfs vzrst nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 vzcpt nf_conntrack vzdquota vzmon vzdev ip6t_REJECT ip6table_mangle ip6table_filter ip6_tables xt_length xt_hl xt_tcpmss xt_TCPMSS iptable_mangle iptable_filter xt_multiport xt_limit xt_dscp vhost_net ipt_REJECT tun macvtap macvlan kvm_intel kvm ip_tables ipmi_devintf ipmi_msghandler dell_rbu dlm configfs fuse vzevent ib_iser rdma_cm ib_addr iw_cm ib_cm ib_sa ib_mad ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi nfsd nfs auth_rpcgss nfs_acl fscache lockd sunrpc ipv6 ext4 jbd2 usb_storage snd_pcsp snd_pcm shpchp snd_page_alloc snd_timer snd soundcore dcdbas wmi sb_edac edac_core power_meter iTCO_wdt iTCO_vendor_support ext3 mbcache jbd sg ahci bnx2x megaraid_sas mdio [last unloaded: ipmi_si]
Jul 19 16:03:39 hypervisor kernel: Pid: 3598, comm: rsyslogd veid: 0 Tainted: G W --------------- 2.6.32-20-pve #1
Jul 19 16:03:39 hypervisor kernel: Call Trace:
Jul 19 16:03:39 hypervisor kernel: <IRQ> [<ffffffff8106b3e7>] ? warn_slowpath_common+0x87/0xe0
Jul 19 16:03:39 hypervisor kernel: [<ffffffff8106b4f6>] ? warn_slowpath_fmt+0x46/0x50
Jul 19 16:03:39 hypervisor kernel: [<ffffffff814b4195>] ? inet_gso_segment+0x105/0x2a0
Jul 19 16:03:39 hypervisor kernel: [<ffffffff814494a0>] ? skb_gso_segment+0x220/0x310
Jul 19 16:03:39 hypervisor kernel: [<ffffffff8144bc42>] ? dev_hard_start_xmit+0x1b2/0x4f0
Jul 19 16:03:39 hypervisor kernel: [<ffffffff81468e7a>] ? sch_direct_xmit+0x16a/0x1d0
Jul 19 16:03:39 hypervisor kernel: [<ffffffff8144c445>] ? dev_queue_xmit+0x4c5/0x6a0
Jul 19 16:03:39 hypervisor kernel: [<ffffffff814e2b80>] ? br_dev_queue_push_xmit+0x60/0xc0
Jul 19 16:03:39 hypervisor kernel: [<ffffffff814e2c38>] ? br_forward_finish+0x58/0x60
Jul 19 16:03:39 hypervisor kernel: [<ffffffff814e2f2b>] ? __br_forward+0x9b/0xc0
Jul 19 16:03:39 hypervisor kernel: [<ffffffff8147660c>] ? nf_hook_slow+0xac/0x120
Jul 19 16:03:39 hypervisor kernel: [<ffffffff814e3e20>] ? br_handle_frame_finish+0x0/0x2f0
Jul 19 16:03:39 hypervisor kernel: [<ffffffff814e30bd>] ? br_forward+0x5d/0x70
Jul 19 16:03:39 hypervisor kernel: [<ffffffff814e4017>] ? br_handle_frame_finish+0x1f7/0x2f0
Jul 19 16:03:39 hypervisor kernel: [<ffffffff814e42ba>] ? br_handle_frame+0x1aa/0x250
Jul 19 16:03:39 hypervisor kernel: [<ffffffff8144c91c>] ? __netif_receive_skb+0x23c/0x6e0
Jul 19 16:03:39 hypervisor kernel: [<ffffffff8144cf08>] ? netif_receive_skb+0x58/0x60
Jul 19 16:03:39 hypervisor kernel: [<ffffffff8144d4a3>] ? __napi_gro_receive+0xe3/0x130
Jul 19 16:03:39 hypervisor kernel: [<ffffffff8144d020>] ? napi_skb_finish+0x50/0x70
Jul 19 16:03:39 hypervisor kernel: [<ffffffff8144d578>] ? napi_gro_receive+0x38/0x50
Jul 19 16:03:39 hypervisor kernel: [<ffffffffa005c5a1>] ? bnx2x_rx_int+0xb21/0x16b0 [bnx2x]
Jul 19 16:03:39 hypervisor kernel: [<ffffffff8143f64a>] ? consume_skb+0x3a/0x80
Jul 19 16:03:39 hypervisor kernel: [<ffffffffa005a998>] ? bnx2x_free_tx_pkt+0x1b8/0x2a0 [bnx2x]
Jul 19 16:03:39 hypervisor kernel: [<ffffffffa005d1d4>] ? bnx2x_poll+0xa4/0x2e0 [bnx2x]
Jul 19 16:03:39 hypervisor kernel: [<ffffffff8144d879>] ? net_rx_action+0x199/0x3c0
Jul 19 16:03:39 hypervisor kernel: [<ffffffff81074c7b>] ? __do_softirq+0x11b/0x260
Jul 19 16:03:39 hypervisor kernel: [<ffffffff8100c32c>] ? call_softirq+0x1c/0x30
Jul 19 16:03:39 hypervisor kernel: [<ffffffff8100de95>] ? do_softirq+0x75/0xb0
Jul 19 16:03:39 hypervisor kernel: [<ffffffff81074f55>] ? irq_exit+0xc5/0xd0
Jul 19 16:03:39 hypervisor kernel: [<ffffffff81523452>] ? do_IRQ+0x72/0xe0
Jul 19 16:03:39 hypervisor kernel: [<ffffffff8100bb13>] ? ret_from_intr+0x0/0x11
Jul 19 16:03:39 hypervisor kernel: <EOI>
Jul 19 16:03:39 hypervisor kernel: ---[ end trace 28466d7eac4ee135 ]---
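
The only workaround we can think of testing for now is to disable the segmentation/receive offloads (GRO/GSO/TSO) on the bnx2x port and watch whether the warnings stop. This is just a sketch, not a confirmed fix, and eth0 is an assumption for the affected 10G port:

# ethtool -K eth0 gro off gso off tso off

If that helps, we would probably make it persistent by adding the same command as a post-up line under the interface stanza in /etc/network/interfaces.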
 
