VM won't start after migration from PVE 2.3 to 3.1

check-ict

Well-Known Member
Hello,

We have installed a new 3-node cluster with the latest PVE 3.1

We moved almost all VMs to the new cluster, but we noticed that 2 VMs have problems with the boot drive. Both VMs run Server 2003. When I try to boot them on the 3.1 cluster, I get the error "Not a bootable disk". On the older PVE server (2.3) they work.

If I try to boot the damaged VM disk with a rescue CD, it doesn't show any data on the disk. Partition tools are not able to mount or recover the disk; it looks empty. I tried a chkdsk /F on the old server and copied the VM disk again, but still no go.

How can I make my server 2003 VM's boot on the new PVE 3.1 nodes?

Old node:
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-96
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-14-pve: 2.6.32-74
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-20
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-7
vncterm: 1.0-4
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-10
ksm-control-daemon: 1.1-1

bootdisk: ide0
cores: 2
ide0: local:110/vm-110-disk-1.qcow2,size=20971480K
ide1: local:110/vm-110-disk-2.qcow2,size=32G
ide2: local:iso/gparted-live-0.9.0-7.iso,media=cdrom,size=100140K
memory: 2048
name: fitrex-rdp-iis
net0: e1000=32:D0:55:26:5E:03,bridge=vmbr0
ostype: wxp
sockets: 1

New node:
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1

bootdisk: ide0
cores: 2
ide0: zfs01:127/vm-127-disk-1.raw,format=raw,size=32G <--- doesn't work
ide1: zfs01:127/vm-127-disk-2.raw,size=32G <--- works
ide2: none,media=cdrom
memory: 2048
name: fitrex-rdp-iis
net0: e1000=32:D0:55:26:5E:03,bridge=vmbr0
ostype: wxp
sockets: 1

The storage is an NFS share to a ZFS fileserver.
 
Some extra information:

The config on the old server says "qcow2", but the disk is actually RAW. PVE 3.1 won't boot it if we keep the "qcow2" disk name, so we changed the config and moved the VM disk to rename it from qcow2 to raw.

If I boot with GParted, I do see the disk, but it's empty (no label, partitions, etc.).
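To double-check what an image really is regardless of its file name, qemu-img info reads the actual header. Something like this (paths assumed from the storage names in the configs above):
Code:
qemu-img info /var/lib/vz/images/110/vm-110-disk-1.qcow2    # old node (local storage)
qemu-img info /mnt/pve/zfs01/images/127/vm-127-disk-1.raw   # new node (NFS mount)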
 
Hi,
you can try changing the drive's cache option from "none" to "writethrough" (and then power-cycle the VM).
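For example, roughly like this on the CLI (VMID and volume taken from the config above):
Code:
# re-specify the whole drive string, adding cache=writethrough
qm set 127 -ide0 zfs01:127/vm-127-disk-1.raw,format=raw,cache=writethrough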

Udo
 
I just found out that a Server 2008 R2 VM is also affected by this.

All the affected VMs were migrated from Hyper-V to Proxmox 2.3 some time ago (I used "qemu-img convert disk1.vhd disk1.raw -f vpc" at the time).
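For reference, the explicit form of that conversion would be something like this; when -O is omitted, qemu-img writes raw output by default:
Code:
qemu-img convert -f vpc -O raw disk1.vhd disk1.raw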
 
...
ide0: zfs01:127/vm-127-disk-1.raw,format=raw,size=32G <--- doesn't work
ide1: zfs01:127/vm-127-disk-2.raw,size=32G <--- works
...
Strange!
Does it work on local storage?

What output does file give? Something like this (depends on your mount point):
Code:
file /mnt/pve/zfs01/images/127/vm-127-disk-1.raw
file /mnt/pve/zfs01/images/127/vm-127-disk-2.raw
Udo
 
New server:
file /mnt/pve/zfs01/images/127/vm-127-disk-1.raw
/mnt/pve/zfs01/images/127/vm-127-disk-1.raw: QEMU QCOW Image (v2), 21474795520 bytes

Old:
file /var/lib/vz/images/110/vm-110-disk-1.raw
/var/lib/vz/images/110/vm-110-disk-1.raw: QEMU QCOW Image (v2), 21474795520 bytes

Funny that it still thinks it's qcow2.

I will try a qemu-img convert vm-127-disk-1.raw vm-127-disk-1.raw2 -f qcow2
 
file vm-127-disk-1.raw2
vm-127-disk-1.raw2: x86 boot sector; partition 1: ID=0x7, active, starthead 1, startsector 63, 41940929 sectors, code offset 0xc0

It works!!! Thanks Udo!
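For anyone else hitting this, the whole swap would look roughly like this (paths from the file output above; the backup and rename steps are an assumption, the convert is what I actually ran):
Code:
# with the VM stopped
cd /mnt/pve/zfs01/images/127
# the .raw file is really qcow2, so tell qemu-img the input format; output defaults to raw
qemu-img convert -f qcow2 vm-127-disk-1.raw vm-127-disk-1.raw2
# keep the original as a backup and put the converted image in its place
mv vm-127-disk-1.raw vm-127-disk-1.raw.bak
mv vm-127-disk-1.raw2 vm-127-disk-1.raw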

So it seems Proxmox 3.1 normally recognizes whether a disk is qcow2 or raw and won't let you boot if the config is wrong. However, I somehow tricked PVE 3.1 into believing the file really was raw, so it tried to boot it anyway.

Or..

Proxmox 3.1 only gives an error when trying to boot a raw disk that is named "qcow2", while it gives no error when you boot a qcow2 disk named as "raw". Maybe that warning could be implemented too.
 
Hi.
In the config one disk has raw as its format, but the image isn't raw - it's a qcow2 file that is merely named as raw. So the mistake is not on the PVE side.
Of course PVE could add checks and proper error messages, but this situation can only happen if someone does it by hand (and then they should know what they are doing).
The old config does not have a format entry, so kvm uses the right format (autodetection).
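Roughly, that is the difference between the two disk lines in the new config above:
Code:
# format forced to raw -> qemu treats the mislabelled qcow2 file as raw and finds no boot sector
ide0: zfs01:127/vm-127-disk-1.raw,format=raw,size=32G
# no format entry -> the format is autodetected, so the mislabelled file still boots
ide1: zfs01:127/vm-127-disk-2.raw,size=32G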

Udo
 
