System crash

Nemesiz

My server had been running for 9 days until I got a system crash. It was something with bond0 (no logs). A few days later I hit a kernel bug, but the system kept running.

Bug log:
Apr 29 13:43:30 vm kernel: bonding: unable to update mode of bond0 because interface is up.
Apr 29 13:43:30 vm kernel: bonding: bond0: Setting MII monitoring interval to 100.
Apr 29 13:43:30 vm dhclient: Internet Software Consortium DHCP Client 2.0pl5
Apr 29 13:43:30 vm dhclient: Copyright 1995, 1996, 1997, 1998, 1999 The Internet Software Consortium.
Apr 29 13:43:30 vm dhclient: All rights reserved.
Apr 29 13:43:30 vm dhclient:
Apr 29 13:43:30 vm dhclient: Please contribute if you find this software useful.
Apr 29 13:43:30 vm dhclient: For info, please visit http://www.isc.org/dhcp-contrib.html
Apr 29 13:43:30 vm dhclient:
Apr 29 13:43:30 vm dhclient: venet0: unknown hardware address type 65535
Apr 29 13:43:31 vm dhclient: venet0: unknown hardware address type 65535
Apr 29 13:43:31 vm dhclient: Listening on LPF/bond0/x:x:x:x:x:x
Apr 29 13:43:31 vm dhclient: Sending on LPF/bond0/x:x:x:x:x:x
Apr 29 13:43:31 vm dhclient: Sending on Socket/fallback/fallback-net
Apr 29 13:43:31 vm kernel: ------------[ cut here ]------------
Apr 29 13:43:31 vm kernel: kernel BUG at kernel/workqueue.c:224!
Apr 29 13:43:31 vm kernel: invalid opcode: 0000 [1] PREEMPT SMP
Apr 29 13:43:31 vm kernel: CPU: 3
Apr 29 13:43:31 vm kernel: Modules linked in: bridge 8021q kvm_intel kvm vzethdev vznetdev simfs vzrst vzcpt tun vzdquota vzmon vzdev xt_length ipt_ttl xt_tcpmss xt_multiport xt_limit ipt_tos ipv6 xt_TCPMSS ipt_REJECT xt_tcpudp xt_state iptable_mangle iptable_nat nf_nat nf_conntrack_ipv4 nf_conntrack iptable_filter ip_tables x_tables bonding dm_snapshot dm_mirror snd_hda_intel snd_pcm snd_timer 8139cp snd_page_alloc snd_hwdep thermal button parport_pc snd 8139too mii sky2 evdev parport pcspkr intel_agp processor soundcore scsi_wait_scan dm_mod usbhid hid usb_storage libusual sd_mod sr_mod ide_disk ide_generic ide_cd cdrom ide_core shpchp pci_hotplug uhci_hcd ehci_hcd usbcore iTCO_wdt iTCO_vendor_support i2c_i801 i2c_core ahci pata_jmicron pata_acpi ata_generic libata scsi_mod ohci1394 ieee1394 isofs msdos fat
Apr 29 13:43:31 vm kernel: Pid: 3128, comm: bond0 Not tainted 2.6.24-2-pve #1 ovz005
Apr 29 13:43:31 vm kernel: RIP: 0010:[<ffffffff802534ec>] [<ffffffff802534ec>] queue_delayed_work_on+0xac/0xe0
Apr 29 13:43:31 vm kernel: RSP: 0000:ffff8101a4977e70 EFLAGS: 00010286
Apr 29 13:43:31 vm kernel: RAX: 0000000000000000 RBX: ffff8101a48b2aa8 RCX: 0000000000000019
Apr 29 13:43:31 vm kernel: RDX: 0000000000000000 RSI: ffff8101a65a0c40 RDI: ffff8101a65a0c40
Apr 29 13:43:31 vm kernel: RBP: ffff8101a48b2ac8 R08: 0000000000000000 R09: 0000000000000000
Apr 29 13:43:31 vm kernel: R10: 0000000000000000 R11: 00000000b7f82550 R12: 0000000000000019
Apr 29 13:43:31 vm kernel: R13: 00000000ffffffff R14: ffff8101a1180088 R15: 0000000000000000
Apr 29 13:43:31 vm kernel: FS: 0000000000000000(0000) GS:ffff8101a9002f80(0000) knlGS:0000000000000000
Apr 29 13:43:31 vm kernel: CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
Apr 29 13:43:31 vm kernel: CR2: 00000000d0803000 CR3: 000000012edb3000 CR4: 00000000000026e0
Apr 29 13:43:31 vm kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Apr 29 13:43:31 vm kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Apr 29 13:43:31 vm kernel: Process bond0 (pid: 3128, veid=0, threadinfo ffff8101a4976000, task ffff8101a143f1e0)
Apr 29 13:43:31 vm kernel: Stack: ffff8101a48b2700 ffff8101a48b2ab0 ffff8101a1180080 ffff8101a48b2aa8
Apr 29 13:43:31 vm kernel: ffffffff88334520 ffffffff80252a38 ffff8101a1180098 ffff8101a1180080
Apr 29 13:43:31 vm kernel: ffffffff80253610 ffff8101a1180088 0000000000000000 ffffffff802536d5
Apr 29 13:43:31 vm kernel: Call Trace:
Apr 29 13:43:31 vm kernel: [<ffffffff88334520>] :bonding:bond_mii_monitor+0x0/0xf0
Apr 29 13:43:31 vm kernel: [<ffffffff80252a38>] run_workqueue+0x88/0x120
Apr 29 13:43:31 vm kernel: [<ffffffff80253610>] worker_thread+0x0/0x130
Apr 29 13:43:31 vm kernel: [<ffffffff802536d5>] worker_thread+0xc5/0x130
Apr 29 13:43:31 vm kernel: [<ffffffff80257f70>] autoremove_wake_function+0x0/0x30
Apr 29 13:43:31 vm kernel: [<ffffffff80253610>] worker_thread+0x0/0x130
Apr 29 13:43:31 vm kernel: [<ffffffff80253610>] worker_thread+0x0/0x130
Apr 29 13:43:31 vm kernel: [<ffffffff80257b9b>] kthread+0x4b/0x80
Apr 29 13:43:31 vm kernel: [<ffffffff8020d338>] child_rip+0xa/0x12
Apr 29 13:43:31 vm kernel: [<ffffffff80220950>] lapic_next_event+0x0/0x10
Apr 29 13:43:31 vm kernel: [<ffffffff80257b50>] kthread+0x0/0x80
Apr 29 13:43:31 vm kernel: [<ffffffff8020d32e>] child_rip+0x0/0x12
Apr 29 13:43:31 vm kernel:
Apr 29 13:43:31 vm kernel:
Apr 29 13:43:31 vm kernel: Code: 0f 0b eb fe 0f 0b eb fe 0f 0b eb fe 44 89 ee 48 89 ef e8 cd
Apr 29 13:43:31 vm kernel: RIP [<ffffffff802534ec>] queue_delayed_work_on+0xac/0xe0
Apr 29 13:43:31 vm kernel: RSP <ffff8101a4977e70>
Apr 29 13:43:31 vm kernel: ---[ end trace d3d425b2837036c1 ]---
Apr 29 13:43:32 vm dhclient: DHCPDISCOVER on bond0 to 255.255.255.255 port 67 interval 6
Apr 29 13:43:32 vm dhclient: receive_packet failed on bond0: Network is down
Apr 29 13:43:42 vm kernel: bond0: no IPv6 routers present
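For what it is worth, the trace shows the bonding MII link monitor (bond_mii_monitor) dying inside queue_delayed_work_on() while dhclient was bringing bond0 up, right after the "unable to update mode of bond0 because interface is up" message. In the vanilla 2.6.24 source the lines around kernel/workqueue.c:224 are the sanity checks in that function (the OpenVZ patches in the pve kernel may shift the exact line), so it looks like the MII monitor's delayed work got re-armed while it was still pending.

Sketch (kernel/workqueue.c, vanilla 2.6.24, trimmed to the relevant checks):

int queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
                          struct delayed_work *dwork, unsigned long delay)
{
        int ret = 0;
        struct timer_list *timer = &dwork->timer;
        struct work_struct *work = &dwork->work;

        if (!test_and_set_bit(WORK_STRUCT_PENDING, work_data_bits(work))) {
                BUG_ON(timer_pending(timer));      /* fires if the timer is already armed */
                BUG_ON(!list_empty(&work->entry)); /* fires if the work is already queued */
                /* ... arm the timer and queue the work ... */
                ret = 1;
        }
        return ret;
}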

After the next few days I got another system crash (again no logs). Every crash was connected with the network.

My system:

# pveversion -v
pve-manager: 1.1-3 (pve-manager/1.1/3718)
qemu-server: 1.0-10
pve-kernel: 2.6.24-5
pve-kvm: 83-1
pve-firmware: 1
vncterm: 0.9-1
vzctl: 3.0.23-1pve1
vzdump: 1.1-1
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1dso1

Network: eth0 and eth1 -> bond0 balance-rr
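In case the config matters: I don't have the exact file in front of me, but bond0 is defined in /etc/network/interfaces roughly like the sketch below, in the usual Debian ifenslave style (dhclient in the log means bond0 gets its address over DHCP, and the MII interval of 100 matches the log line above) -- treat the exact values as an illustration:

auto bond0
iface bond0 inet dhcp
        slaves eth0 eth1
        bond_mode balance-rr
        bond_miimon 100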


Are there maybe any new updates that address this?
 
