KVM VE not booting after migration to newer proxmox

mbraun

New Member
May 28, 2011
Hi all,

I've been using Proxmox for some time now and I'd like to thank you for all the great work; it's really an exceptionally nice piece of software.

However, from time to time even the best software drives me mad. We have two servers running PVE with KVM guests. The configuration is quite straightforward: the guests use virtio for the disk and network interfaces, raw as the disk format, and run Debian Squeeze. Everything works just fine on Machine A.

Code:
libpve-storage-perl                               1.0-17
pve-firmware                                      1.0-11
pve-kernel-2.6.24-12-pve                          2.6.24-25
pve-manager                                       1.8-17
pve-qemu-kvm                                      0.14.0-3
vzctl                                             3.0.26-1pve4
proxmox-ve-2.6.24                                 1.6-26

On Machine B, no KVM guest migrated from Machine A will start. The boot process shows "Not a bootable disk" right after the BIOS POST. Creating a new KVM guest on Machine B works fine, but during OS installation /dev/vda cannot be read and the Debian installer gets stuck with error messages.

Code:
libpve-storage-perl                               1.0-17
pve-firmware                                      1.0-11
pve-headers-2.6.32-4-pve                          2.6.32-32
pve-kernel-2.6.32-4-pve                           2.6.32-33
pve-manager                                       1.8-17
pve-qemu-kvm                                      0.14.0-3
vzctl                                             3.0.26-1pve4
proxmox-ve-2.6.32                                 1.8-33

So, what's the issue here? I can't run KVM guests on the 2.6.32 branch, not even ones created on this system. Does anyone have an idea how to resolve this?
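As a first triage step (a hedged sketch, not something from the thread), one could check on Machine B whether a migrated raw image still carries a valid MBR boot signature, since "Not a bootable disk" from the BIOS usually means the first sector does not end in 0x55AA. The image path below is an assumption based on PVE's local-storage layout for the config shown later in the thread.

```shell
#!/bin/sh
# Hypothetical helper: check whether a raw disk image still carries a valid
# MBR boot signature (bytes 0x55 0xAA at offset 510) after migration.
check_boot_sig() {
    img="$1"
    # Read bytes 510-511 of the image and render them as hex.
    sig=$(dd if="$img" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
    if [ "$sig" = "55aa" ]; then
        echo "$img: MBR boot signature present"
    else
        echo "$img: no boot signature (got: $sig)"
    fi
}

# Assumed path of the migrated image on local storage.
check_boot_sig /var/lib/vz/images/118/vm-118-disk-1.raw
```

If the signature is missing on Machine B but present on Machine A, the image was damaged in transit rather than misread by KVM.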

Thank you!
Martin
 
Post the output of 'pveversion -v' on the problematic box.
 
Thanks for the quick reply!

Code:
pve-manager: 1.8-17 (pve-manager/1.8/5948)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.8-33
pve-kernel-2.6.32-4-pve: 2.6.32-33
qemu-server: 1.1-30
pve-firmware: 1.0-11
libpve-storage-perl: 1.0-17
vncterm: 0.9-2
vzctl: 3.0.26-1pve4
vzdump: 1.2-12
vzprocps: 2.0.11-2
vzquota: 3.0.12-1dso1
pve-qemu-kvm: 0.14.0-3
ksm-control-daemon: 1.0-5

Martin
 
Looks fine. Post the VM config (cat /etc/qemu-server/VMID.conf) of the non-working Squeeze guest.

And give details about the hardware (CPU/mainboard).
 
The CPU is an Intel Core 2 Quad 8300, the board is an Intel DG35EC, and the storage controller is an Areca ARC-1210 doing RAID5. /var/lib/vz is located on the RAID5 array while the system runs on a dedicated SATA disk.

This is the config file of a new machine that fails to detect /dev/vda while installing Squeeze:
Code:
name: foo.bar.de
ide2: local:iso/debian-6.0.1a-amd64-netinst.iso,media=cdrom
vlan0: virtio=2A:4C:92:3B:7C:DD
bootdisk: virtio0
virtio0: local:118/vm-118-disk-1.raw
ostype: l26
memory: 512
onboot: 1
sockets: 1

The Debian installer finds /dev/vda and detects its size, but when it tries to create partitions, the kernel throws an I/O error.
[screenshot attached: screenshot2011-05-28at3ufs.png]
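One way to narrow this down (a hypothetical variant, not something tried in the thread) would be to attach the same image via emulated IDE instead of virtio; only the bootdisk and disk lines change relative to the config above. If the installer then sees the disk without I/O errors, the virtio driver stack on this kernel is the prime suspect.

```
name: foo.bar.de
ide2: local:iso/debian-6.0.1a-amd64-netinst.iso,media=cdrom
vlan0: virtio=2A:4C:92:3B:7C:DD
bootdisk: ide0
ide0: local:118/vm-118-disk-1.raw
ostype: l26
memory: 512
onboot: 1
sockets: 1
```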
 
Your config looks fine. Any hardware issue with the controller/disks? Run tests to validate.
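Such a validation could look like the following minimal sketch: a write/read-back integrity test on the storage backing /var/lib/vz. The directory, file name, and test size are assumptions; this exercises the filesystem path through the controller, not the raw device.

```shell
#!/bin/sh
# Hedged sketch: write pseudo-random data, flush it to disk, read it back,
# and compare checksums. A mismatch points at the controller or disks.
io_sanity_check() {
    dir="$1"
    [ -d "$dir" ] || { echo "no such directory: $dir"; return 1; }
    f="$dir/io-sanity-test.bin"
    # 16 MB of random data is an arbitrary test size.
    dd if=/dev/urandom of="$f" bs=1M count=16 2>/dev/null
    before=$(md5sum "$f" | cut -d' ' -f1)
    sync
    after=$(md5sum "$f" | cut -d' ' -f1)
    rm -f "$f"
    if [ "$before" = "$after" ]; then
        echo "read-back ok"
    else
        echo "CHECKSUM MISMATCH: $before vs $after"
    fi
}

# Typical usage on the host:
#   io_sanity_check /var/lib/vz
```

Repeating this a few times with larger sizes gives a rough confidence check, though it cannot rule out errors that only show up under the guest's I/O pattern.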
 
Well, I checked the log files of the host and found no I/O-related error messages. The system that works fine also has an arcmsr controller (ARC-1680), but runs an older kernel version. Might this be an issue with the arcmsr module in 2.6.32-pve? It's quite strange that the error only occurs on the host<->guest side but not at hw<->host. It's also 100% reproducible, while other machines (OpenVZ) on that host work just fine. I moved a few hundred GB of data around on this machine without a single I/O or checksum error.
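A minimal sketch of such a log check (the log path and the match patterns are assumptions, not the exact commands used):

```shell
#!/bin/sh
# Scan a kernel log for arcmsr driver messages and block-layer I/O errors.
check_kernel_log() {
    grep -iE 'arcmsr|i/o error|end_request' "$1" 2>/dev/null
}

# Typical usage on the host:
#   check_kernel_log /var/log/kern.log || echo "no storage errors logged"
```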
Thanks for your support!
 
Strange. Try 2.6.35 (or 2.6.18 if you also need OpenVZ); any difference?
 
And as it's quite an old board, make sure you run the latest mainboard BIOS. No idea if it helps, but I also remember some quite weird issues with buggy desktop BIOSes and KVM.
 
Hi,

we managed to update to 2.6.35 tonight. It still doesn't detect the disk image correctly, but now throws different error messages. Since the drive is connected through a separate RAID controller, I doubt the BIOS is the culprit here, but I'll try that too.
[screenshot attached: hda.png]
 
And test with the latest KVM 0.14.1 (from the pvetest repository).
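Enabling that repository could look roughly like the sketch below. The exact apt source line is an assumption (PVE 1.x was based on Debian Lenny); verify it against the official documentation before use.

```shell
#!/bin/sh
# Hedged sketch: the assumed apt source line for the pvetest repository on a
# PVE 1.x (Debian Lenny based) host.
pvetest_source_line() {
    echo "deb http://download.proxmox.com/debian lenny pvetest"
}

# Typical usage (as root):
#   pvetest_source_line > /etc/apt/sources.list.d/pvetest.list
#   apt-get update && apt-get install pve-qemu-kvm
pvetest_source_line
```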