Proxmox flooding logs

albertodega

Hi all.
Since this morning, an instance of Proxmox 3.1-21 has been flooding its log files (kern.log, syslog and messages) with repetitions of the following:
Code:
Aug 26 11:23:43 pve1 kernel: ------------[ cut here ]------------
Aug 26 11:23:43 pve1 kernel: WARNING: at lib/list_debug.c:26 __list_add+0x6c/0x90() (Tainted: G        W  ---------------   )
Aug 26 11:23:43 pve1 kernel: Hardware name: ProLiant DL320e Gen8 v2
Aug 26 11:23:43 pve1 kernel: list_add corruption. next->prev should be prev (ffff880049a9fa00), but was ffff8806845532b8. (next=ffff88071e353278).
Aug 26 11:23:43 pve1 kernel: Modules linked in: vzethdev vznetdev pio_nfs pio_direct pfmt_raw pfmt_ploop1 ploop simfs vzrst vzcpt vzdquota vzmon vzdev ip6t_REJECT ip6table_mangle ip6table_filter ip6_tables xt_length xt_hl xt_tcpmss xt_TCPMSS xt_multiport xt_dscp vhost_net tun macvtap ipt_REJECT macvlan kvm_intel kvm fuse vzevent ib_iser rdma_cm ib_addr iw_cm ib_cm ib_sa ib_mad ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi nfsd nfs nfs_acl auth_rpcgss fscache lockd sunrpc ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack iptable_mangle ipt_LOG xt_limit iptable_filter ip_tables mii bonding 8021q garp ipv6 snd_pcsp snd_pcm hpilo snd_page_alloc snd_timer power_meter snd soundcore iTCO_wdt serio_raw hpwdt iTCO_vendor_support shpchp ext3 mbcache jbd raid1 sg ahci xhci_hcd tg3 [last unloaded: scsi_wait_scan]
Aug 26 11:23:43 pve1 kernel: Pid: 5621, comm: kvm veid: 0 Tainted: G        W  ---------------    2.6.32-26-pve #1
Aug 26 11:23:43 pve1 kernel: Call Trace:
Aug 26 11:23:43 pve1 kernel: [<ffffffff8106f757>] ? warn_slowpath_common+0x87/0xe0
Aug 26 11:23:43 pve1 kernel: [<ffffffff8106f866>] ? warn_slowpath_fmt+0x46/0x50
Aug 26 11:23:43 pve1 kernel: [<ffffffff8128d69c>] ? __list_add+0x6c/0x90
Aug 26 11:23:43 pve1 kernel: [<ffffffffa04ad29c>] ? vmx_vcpu_load+0xdc/0x190 [kvm_intel]
Aug 26 11:23:43 pve1 kernel: [<ffffffffa04a9e66>] ? __vmx_load_host_state+0xe6/0x100 [kvm_intel]
Aug 26 11:23:43 pve1 kernel: [<ffffffffa0454389>] ? kvm_arch_vcpu_load+0x29/0x160 [kvm]
Aug 26 11:23:43 pve1 kernel: [<ffffffffa04460c4>] ? vcpu_load+0x54/0x80 [kvm]
Aug 26 11:23:43 pve1 kernel: [<ffffffffa0447a6b>] ? kvm_vcpu_block+0x7b/0xc0 [kvm]
Aug 26 11:23:43 pve1 kernel: [<ffffffff8109b440>] ? autoremove_wake_function+0x0/0x40
Aug 26 11:23:43 pve1 kernel: [<ffffffffa045af5c>] ? kvm_arch_vcpu_ioctl_run+0x47c/0x1050 [kvm]
Aug 26 11:23:43 pve1 kernel: [<ffffffffa04418e3>] ? kvm_vcpu_ioctl+0x2e3/0x580 [kvm]
Aug 26 11:23:43 pve1 kernel: [<ffffffff810bc809>] ? do_futex+0x159/0xb10
Aug 26 11:23:43 pve1 kernel: [<ffffffff811b45fa>] ? vfs_ioctl+0x2a/0xa0
Aug 26 11:23:43 pve1 kernel: [<ffffffff811b4c2e>] ? do_vfs_ioctl+0x7e/0x570
Aug 26 11:23:43 pve1 kernel: [<ffffffffa044b744>] ? kvm_on_user_return+0x74/0x80 [kvm]
Aug 26 11:23:43 pve1 kernel: [<ffffffff811b516f>] ? sys_ioctl+0x4f/0x80
Aug 26 11:23:43 pve1 kernel: [<ffffffff8100b182>] ? system_call_fastpath+0x16/0x1b
Aug 26 11:23:43 pve1 kernel: ---[ end trace 377c11f7dbdb20fd ]---
We have temporarily stopped rsyslog because the flood was filling up the / partition, and the node now seems stable, but the warning above is surely still being generated.
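As a more targeted stopgap than stopping rsyslog entirely, we are considering a discard filter that drops only the repeating lines. This is just a sketch: the file name and match strings are assumptions taken from the trace above, it only hides the symptom (the kernel warning itself keeps firing), and the other lines of the trace would need their own patterns.
Code:
# /etc/rsyslog.d/00-drop-kvm-list-add-warn.conf  (hypothetical file name)
# Drop the repeating __list_add warning lines before they reach kern.log,
# syslog and messages. "~" is the discard action in the rsyslog version
# shipped with Debian 7 / Proxmox 3.x.
:msg, contains, "list_add corruption" ~
:msg, contains, "__list_add+0x6c/0x90" ~
After adding the file, "service rsyslog restart" would be needed for the filter to take effect.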

Any clues about what's happening?

Thank you in advance,
Alberto