KVM internal-error

vzfanatic

Hello All,

I have a problem I've spent the last two days trying to sort out. I have two identical servers in different locations. At one location the KVMs run fine, no problems there.

At the new location I cannot get any KVM to run, whether restored from a backup (taken at the working location) or created from scratch. Creating a new KVM goes fine until I reboot or power cycle it. In every case, a few seconds after starting, the Proxmox web GUI shows 'internal-error'. I have also tried starting the KVM with 'KVM hardware virtualization' disabled; with that option deselected the KVM does start, but the Windows boot reports that the volume is damaged, or I see only a black screen.

So whether I restore from a backup and start it, or create a new KVM from scratch, on start I always see 'internal-error', and I can reproduce it every time.

Here is my pveversion output:
proxmox-ve-2.6.32: 3.4-150 (running kernel: 2.6.32-37-pve)
pve-manager: 3.4-3 (running version: 3.4-3/2fc72fee)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-37-pve: 2.6.32-150
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-16
qemu-server: 3.4-3
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-32
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

The server has 48 GB of RAM and a quad-core Xeon 5520, with an LSI hardware RAID 5 controller.

If anyone has ideas about this I would appreciate it. I have already worked on it for two days without any good results.
 
**BUMP**
Even Proxmox doesn't know about this? I really need to get it sorted... anyone?
 
Hi,

I understand that internal-error can be triggered by bad hardware. If a VM running under KVM gets unexpectedly paused, take a look at /var/log/syslog on the host. If you see something like this, try replacing your RAM or CPU. In rare situations it could be your mainboard.

Oct 8 02:30:01 slash QEMU[3038]: KVM internal error. Suberror: 3
Oct 8 02:30:01 slash QEMU[3038]: extra data[0]: 0x0000000080000b0e
Oct 8 02:30:01 slash QEMU[3038]: extra data[1]: 0x0000000000000031
Oct 8 02:30:01 slash QEMU[3038]: extra data[2]: 0x0000000000000083
Oct 8 02:30:01 slash QEMU[3038]: extra data[3]: 0x0000000812968fe0
Oct 8 02:30:01 slash QEMU[3038]: extra data[4]: 0x0000000000000002
Oct 8 02:30:01 slash QEMU[3038]: RAX=0000000812968008 RBX=fffffe0010473090 RCX=00000000c0000101 RDX=00000000ffffffff

To confirm you're seeing this issue, make sure that Suberror is 3 (which means KVM_INTERNAL_ERROR_DELIVERY_EV) and extra data[1] is 0x31 (indicating that the VM exit reason was EXIT_REASON_EPT_MISCONFIG). The rest of the fields may vary, but those two must match those values.
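If you want to check the host log automatically, here is a minimal sketch in Python (assuming the default /var/log/syslog path and the QEMU message format shown above) that flags exactly that Suberror 3 / extra data[1] = 0x31 combination:

```python
#!/usr/bin/env python3
# Minimal sketch: scan the host syslog for KVM internal errors and flag the
# Suberror 3 + extra data[1] == 0x31 combination described above.
# Assumptions: default /var/log/syslog location, QEMU message format as shown.
import re

SYSLOG = "/var/log/syslog"  # adjust if your host logs elsewhere

suberror_re = re.compile(r"KVM internal error\. Suberror:\s*(\d+)")
extra1_re = re.compile(r"extra data\[1\]:\s*0x([0-9a-fA-F]+)")

suberror = None
with open(SYSLOG, errors="replace") as log:
    for line in log:
        m = suberror_re.search(line)
        if m:
            # Remember the suberror; the extra data lines follow it.
            suberror = int(m.group(1))
            continue
        m = extra1_re.search(line)
        if m and suberror == 3:
            # 0x31 (49 decimal) is EXIT_REASON_EPT_MISCONFIG
            if int(m.group(1), 16) == 0x31:
                print("Possible KVM_INTERNAL_ERROR_DELIVERY_EV with EPT misconfig:")
                print("  " + line.rstrip())
            suberror = None
```

Run it on the host (as root, or any user that can read the syslog); any output means the guest hit the delivery error described above.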

Best,

Joe
 
