KVM crash on i7-870

MasterTH

Hi there,

I installed a fresh PVE 1.7 on an i7-870, 16GB DDR3-1333, Adaptec 2405 with RAID1 (no write cache enabled).

The machine is running very fast and everything seemed fine, I thought. But then this:

Jan 4 19:03:28 root1 kernel: ------------[ cut here ]------------
Jan 4 19:03:28 root1 kernel: kernel BUG at arch/x86/kvm/mmu.c:641!
Jan 4 19:03:28 root1 kernel: invalid opcode: 0000 [#1] SMP
Jan 4 19:03:28 root1 kernel: last sysfs file: /sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/target0:1:0/0:1:0:0/scsi_generic/sg1/dev
Jan 4 19:03:28 root1 kernel: CPU 3
Jan 4 19:03:28 root1 kernel: Modules linked in: tun kvm_intel kvm vzethdev vznetdev simfs vzrst vzcpt vzdquota vzmon vzdev xt_tcpudp xt_length xt_hl xt_tcpmss xt_TCPMSS iptable_mangle iptable_filter xt_multiport xt_limit xt_dscp ipt_REJECT ip_tables x_tables ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi bridge stp snd_hda_codec_via usbhid snd_hda_intel hid snd_hda_codec i2c_i801 parport_pc parport i2c_core snd_hwdep snd_pcm serio_raw snd_timer evdev button snd soundcore snd_page_alloc psmouse processor pcspkr ext3 jbd mbcache dm_mirror dm_region_hash dm_log dm_snapshot ata_generic ata_piix firewire_ohci aacraid firewire_core crc_itu_t ehci_hcd r8169 mii libata usbcore nls_base thermal fan thermal_sys [last unloaded: scsi_wait_scan]
Jan 4 19:03:28 root1 kernel: Pid: 28273, comm: kvm Not tainted 2.6.32-4-pve #1 dzhanibekov To Be Filled By O.E.M.
Jan 4 19:03:28 root1 kernel: RIP: 0010:[<ffffffffa03411e9>] [<ffffffffa03411e9>] rmap_remove+0xe2/0x193 [kvm]
Jan 4 19:03:28 root1 kernel: RSP: 0018:ffff8803ad157a78 EFLAGS: 00010292
Jan 4 19:03:28 root1 kernel: RAX: 0000000000000034 RBX: 0000000000004000 RCX: 00000000000010a3
Jan 4 19:03:28 root1 kernel: RDX: 0000000000000000 RSI: 0000000000000096 RDI: 0000000000000246
Jan 4 19:03:28 root1 kernel: RBP: ffff8802d7cb1bc0 R08: 0000000000009dcc R09: 000000000000000a
Jan 4 19:03:28 root1 kernel: R10: 0000000000000004 R11: 0000000000000000 R12: ffff880316da1790
Jan 4 19:03:28 root1 kernel: R13: ffff880396078000 R14: ffffea0000000000 R15: 000ffffffffff000
Jan 4 19:03:28 root1 kernel: FS: 00007f3544147730(0000) GS:ffff880013ec0000(0000) knlGS:0000000000000000
Jan 4 19:03:28 root1 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
Jan 4 19:03:28 root1 kernel: CR2: 00007fbd8dcd1170 CR3: 00000003b8b39000 CR4: 00000000000026e0
Jan 4 19:03:28 root1 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Jan 4 19:03:28 root1 kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Jan 4 19:03:28 root1 kernel: Process kvm (pid: 28273, veid=0, threadinfo ffff8803ad156000, task ffff88041a48f800)
Jan 4 19:03:28 root1 kernel: Stack:
Jan 4 19:03:28 root1 kernel: ffff880316da1790 ffff8802d7cb1bc0 ffff880316da1790 ffff880396078000
Jan 4 19:03:28 root1 kernel: <0> 0000000000000178 ffffffffa034187c 00000000ad157b58 ffff880396078000
Jan 4 19:03:28 root1 kernel: <0> ffff880316da1840 ffff88039607b878 ffff880396078000 ffff8803ad157de8
Jan 4 19:03:28 root1 kernel: Call Trace:
Jan 4 19:03:28 root1 kernel: [<ffffffffa034187c>] ? kvm_mmu_zap_page+0xa8/0x2f2 [kvm]
Jan 4 19:03:28 root1 kernel: [<ffffffffa0341f08>] ? kvm_mmu_zap_all+0x27/0x4d [kvm]
Jan 4 19:03:28 root1 kernel: [<ffffffffa03373eb>] ? kvm_arch_flush_shadow+0x9/0x12 [kvm]
Jan 4 19:03:28 root1 kernel: [<ffffffffa033021c>] ? __kvm_set_memory_region+0x37f/0x4e3 [kvm]
Jan 4 19:03:28 root1 kernel: [<ffffffffa033a343>] ? kvm_arch_vm_ioctl+0x6e8/0xa8a [kvm]
Jan 4 19:03:28 root1 kernel: [<ffffffffa03303ac>] ? kvm_set_memory_region+0x2c/0x42 [kvm]
Jan 4 19:03:28 root1 kernel: [<ffffffffa0330a62>] ? kvm_vm_ioctl+0x2b7/0xd20 [kvm]
Jan 4 19:03:28 root1 kernel: [<ffffffff810e7853>] ? virt_to_head_page+0x9/0x2a
Jan 4 19:03:28 root1 kernel: [<ffffffffa033a878>] ? msr_io+0x102/0x113 [kvm]
Jan 4 19:03:28 root1 kernel: [<ffffffffa03335c2>] ? do_set_msr+0x0/0x14 [kvm]
Jan 4 19:03:28 root1 kernel: [<ffffffffa033b200>] ? kvm_arch_vcpu_ioctl+0x977/0x987 [kvm]
Jan 4 19:03:28 root1 kernel: [<ffffffff8107be6f>] ? do_futex+0x9d/0x989
Jan 4 19:03:28 root1 kernel: [<ffffffffa032edb3>] ? kvm_vcpu_ioctl+0x4d3/0x4e6 [kvm]
Jan 4 19:03:28 root1 kernel: [<ffffffff8105daae>] ? do_send_sig_info+0x5b/0x6a
Jan 4 19:03:28 root1 kernel: [<ffffffff810fd25a>] ? vfs_ioctl+0x21/0x6c
Jan 4 19:03:28 root1 kernel: [<ffffffff810fd7a8>] ? do_vfs_ioctl+0x48d/0x4cb
Jan 4 19:03:28 root1 kernel: [<ffffffff8107c86e>] ? sys_futex+0x113/0x131
Jan 4 19:03:28 root1 kernel: [<ffffffff810f1f30>] ? vfs_read+0xca/0xff
Jan 4 19:03:28 root1 kernel: [<ffffffff810fd823>] ? sys_ioctl+0x3d/0x5c
Jan 4 19:03:28 root1 kernel: [<ffffffff81010c12>] ? system_call_fastpath+0x16/0x1b
Jan 4 19:03:28 root1 kernel: Code: d7 3e 35 a0 e8 74 1e fd e0 0f 0b eb fe a8 01 75 2a 48 39 c5 74 19 48 8b 55 00 48 89 ee 48 c7 c7 f7 3e 35 a0 31 c0 e8 52 1e fd e0 <0f> 0b eb fe 49 c7 01 00 00 00 00 e9 99 00 00 00 48 89 c7 45 31
Jan 4 19:03:28 root1 kernel: RIP [<ffffffffa03411e9>] rmap_remove+0xe2/0x193 [kvm]
Jan 4 19:03:28 root1 kernel: RSP <ffff8803ad157a78>
Jan 4 19:03:28 root1 kernel: ---[ end trace 97b51adff49ca3d8 ]---



What's happening here? Now I can't start my KVM machines; I think I have to reboot the server. Or does it have to do with HT? I gave my machine 2 cores, I will try with only one.
 
Sure, argh... sorry, I forgot it:

Code:
pveversion -v
pve-manager: 1.7-10 (pve-manager/1.7/5323)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.7-30
pve-kernel-2.6.32-4-pve: 2.6.32-30
qemu-server: 1.1-28
pve-firmware: 1.0-10
libpve-storage-perl: 1.0-16
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-10
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.13.0-3
ksm-control-daemon: 1.0-4
 

Hi,
you didn't write whether you use OpenVZ. In that case you should try the 2.6.18 kernel.

Udo
 
Yes, I use both KVM and OpenVZ.

How can I switch to the other kernel?
Hi,
Code:
apt-get install proxmox-ve-2.6.18
after that, change /boot/grub/menu.lst:
set default to 1 (or to whatever position the 2.6.18 entry is at),
or move the 2.6.18 entry (4 lines) to the first position.

Reboot - and then you should have 2.6.18 running.
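
For reference, a GRUB legacy menu.lst might then look roughly like the sketch below; the kernel version strings, root device, and entry order are only placeholders and will differ on your system:

Code:
# /boot/grub/menu.lst (excerpt) - GRUB legacy counts entries from 0,
# so "default 1" boots the second title stanza (the 2.6.18 one here)
default 1
timeout 5

title Proxmox VE, kernel 2.6.32-4-pve
root (hd0,0)
kernel /boot/vmlinuz-2.6.32-4-pve root=/dev/pve/root ro
initrd /boot/initrd.img-2.6.32-4-pve

title Proxmox VE, kernel 2.6.18-2-pve
root (hd0,0)
kernel /boot/vmlinuz-2.6.18-2-pve root=/dev/pve/root ro
initrd /boot/initrd.img-2.6.18-2-pve

After the reboot, uname -r (or the "running kernel" line of pveversion -v, as posted above) should confirm that the 2.6.18 kernel is actually running.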

Udo
 
Why do I have to switch to 2.6.18? I had containers running with 2.6.32.

I just had the same issue with 2.6.18 as well - kernel panics.
 
