VM not booting

WebHostingNeeds

Renowned Member
Dec 4, 2015
I am moving a KVM VM to a Proxmox server. I did this for two other VMs and it worked without any problem.

Code:
root@server70:/var/lib/vz/images/103# qemu-img info vm-103-disk-1.raw
image: vm-103-disk-1.raw
file format: raw
virtual size: 14G (15032385536 bytes)
disk size: 14G
root@server70:/var/lib/vz/images/103#

root@server70:/var/lib/vz/images/103# parted vm-103-disk-1.raw print
Model:  (file)
Disk /var/lib/vz/images/103/vm-103-disk-1.raw: 15.0GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type      File system  Flags
1      1049kB  256MB   255MB   primary   ext2         boot
2      257MB   15.0GB  14.8GB  extended
5      257MB   15.0GB  14.8GB  logical                lvm

root@server70:/var/lib/vz/images/103#

This one fails at boot.

Code:
VFS: Unable to mount root fs on unknown-block(0,0)



Any idea how to fix this?
 
It's likely you wouldn't have been able to boot it on the old system either. Your initrd is either missing the lvm module or in the wrong location on the LVM partition.
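If you can get a shell in the guest (or chroot into it from a rescue CD), you can check that initrd guess directly. A minimal sketch, assuming a Debian/Ubuntu-style guest where lsinitramfs exists; it defaults to the running kernel version, so substitute the version that fails to boot:

```shell
#!/bin/sh
# Does the initrd for this kernel include LVM support?
# KVER defaults to the running kernel -- substitute the failing version.
KVER=$(uname -r)
INITRD=/boot/initrd.img-$KVER
if [ -r "$INITRD" ]; then
    # lsinitramfs (Debian/Ubuntu) lists the contents of the initrd
    lsinitramfs "$INITRD" | grep -iE 'lvm|dm[-_]mod' \
        || echo "no LVM bits found in $INITRD"
else
    echo "initrd $INITRD not found -- run this inside the guest"
fi
```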

Basically you can hook up a rescue CD with the same bitness (32-bit or 64-bit) as the system that is on this disk

For example: http://www.sysresccd.org/Download

and then follow the manual chroot steps from the link below, or try the automagic "Boot an existing Linux system installed on disk" option from the rescue CD above and then run something like update-grub2 to fix the boot:

http://superuser.com/questions/1111...chroot-to-recover-a-broken-linux-installation
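For reference, the manual chroot repair looks roughly like this. The device names come from the parted output above (partition 1 is the ext2 /boot, partition 5 the LVM PV); the volume group and LV name "vg0/root" are examples, so list the real ones with lvs first. This sketch is written as a dry run that only echoes the commands; clear DRY_RUN to actually execute them as root on the rescue CD:

```shell
#!/bin/sh
# Dry-run sketch of repairing GRUB via chroot from a rescue CD.
# The LV name "vg0/root" is an example -- list the real ones with: lvs
DRY_RUN=echo            # set to empty to really run the commands as root

$DRY_RUN vgchange -ay                      # activate the LVM volume group
$DRY_RUN mount /dev/vg0/root /mnt          # mount the root LV
$DRY_RUN mount /dev/sda1 /mnt/boot         # partition 1: the ext2 /boot
for fs in dev proc sys; do                 # bind-mount virtual filesystems
    $DRY_RUN mount --bind /$fs /mnt/$fs
done
$DRY_RUN chroot /mnt update-grub2          # regenerate the GRUB config
```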
 
It's likely you wouldn't have been able to boot it on the old system either. Your initrd is either missing the lvm module or in the wrong location on the LVM partition.

I was able to stop and start the VM on the old server; I tried this even after the problem appeared.

The way I migrated the VM was by creating a new VM in Proxmox, then replacing its .raw file with the .raw file from the old KVM installation. The disk size I set in Proxmox is not exactly the same; there may be a difference of roughly 1 GB. Can this make any difference?

Basically you can hook up a rescue CD with the same bitness (32-bit or 64-bit) as the system that is on this disk

I fixed the issue by booting with an Ubuntu Server boot CD. In rescue mode I found out that /boot was full. I don't know how it booted on the old server, but it did. On the new server the VM had no internet access (I had to log in and change some network settings first).

The problem was that the latest kernel had not been installed properly or something; I was able to boot with an older kernel.

Then I removed the old kernels, and now I can access the VM without any issue. After freeing space, I installed the latest kernel and it worked properly.
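For anyone hitting the same thing: the cleanup boils down to checking /boot usage, listing installed kernels, and purging the ones you no longer need (never the running one). A rough Debian/Ubuntu sketch; the version number in the purge example is made up, use whatever the list shows:

```shell
#!/bin/sh
# Check /boot usage and list installed kernels (Debian/Ubuntu).
df -h /boot                       # a full /boot breaks kernel installs
uname -r                          # the running kernel -- keep this one
# List installed kernel image packages:
dpkg -l 'linux-image-*' 2>/dev/null | awk '/^ii/ {print $2}'
# Then purge old ones by hand, e.g. (example version number):
#   apt-get purge linux-image-3.13.0-24-generic
#   update-grub
```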
 
Great. I overlooked some possibilities for the boot failure. It may be that the previous system remembered the last successful boot entry in grubenv but would not use it on the new KVM because of a difference in disk controller between the two, causing it to fall back to the default (newest) kernel. Another possibility is that the previous KVM passed a kernel cmdline with explicit memory regions or hardware probes that do not work on the new KVM.

I have also seen the full-/boot problem occasionally, especially on systems that use automatic upgrades.