XVA to QCOW2 migration

Daniel Zuwala

New Member
Dec 30, 2016
I have a XenServer with several VMs and I would like to migrate them to my new Proxmox server. So far I have done what I have read here and elsewhere (xva -> raw -> qcow2), but for some reason it doesn't work as expected. The VM boots and the GRUB loader works, letting me choose to boot my Ubuntu server, but then it hangs and nothing can be done. The CPU shows 90% usage and on the screen I have:

Code:
SeaBIOS (...)
Machine UUID ...
Booting from Hard Disk

The VM is an Ubuntu Server 16.04 and its disk is fully encrypted. Could that be a problem? What should I do?
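For context, the last step of such a chain is usually a plain qemu-img conversion, something like the command below; the file names are placeholders, and the xva -> raw step needs a separate extraction tool (e.g. something like xva-img):
Code:
# convert the raw disk extracted from the XVA into qcow2 for Proxmox
qemu-img convert -f raw -O qcow2 vm-disk.raw vm-disk.qcow2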

Thanks.
 
On Linux this is easy; there is nothing to convert. You can use Clonezilla to migrate the VM online, or you can use rsync: boot both VMs with a Linux live CD, then copy everything from the Xen VM to the new VM. But yes... fstab must be adapted. Run grub-install on the new VM and everything should boot up normally. Don't forget to remove the Xen drivers before the first boot.
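On an Ubuntu guest, removing the Xen drivers may be as simple as purging the guest tools package; a sketch, assuming the usual xe-guest-utilities package is what's installed (check what your guest actually has):
Code:
# purge the XenServer guest tools and rebuild the initramfs before the first boot on Proxmox
apt-get purge xe-guest-utilities
update-initramfs -u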
 
Hmm, I tried Clonezilla, but it doesn't work on my VMs. It seems that they are all in PV mode, which means I must switch them to HVM before exporting them, but I cannot find any reliable method to do so. Does anyone have an idea?
 
Does anyone have an idea?
Yes. The standard Linux method always works: boot every VM from a live CD and copy the system with rsync, write GRUB on the target, adjust fstab if needed, reboot, and the system should run.

What do you mean by PV and HVM mode?
 
PV means paravirtualized and HVM means hardware virtual machine. From what I've understood so far, in PV mode the VM has no boot partition and boots through the XenServer host, while in HVM things are more usual. That explains why my conversions from XVA to raw were not able to boot.

I know there's still the rsync solution, but I'm not very confident with this tool and I'm not sure what I should do. I've seen some people exclude certain directories from rsync, some do the rsync on a running VM, and you seem to recommend doing it offline, so I'm not sure exactly how to proceed.
 
OK, no problem... first boot every VM with a live disk; a Clonezilla live CD is easy to use. Open a root shell on both machines and, on the target, create your partitions and filesystems (use a separate virtual disk for every partition). Then go back to the Xen window and copy with the following command:
Code:
rsync -aPvz -e ssh /* target.local:/.
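One caveat if this is run from a live CD: / is then the live system's own root, so mount the old disk first and copy from the mount point. A sketch, assuming the source root sits on /dev/xvda1 and the target's new filesystem is mounted at /mnt/target:
Code:
# mount the source root and copy that, rather than the live CD's own /
mount /dev/xvda1 /mnt/source
rsync -aPvz -e ssh /mnt/source/ target.local:/mnt/target/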
After you have copied everything, chroot into the target and install the metapackage "ubuntu-server". Then display your disks with "blkid" and use that information to set up your fstab. Don't forget to write GRUB:
Code:
grub-install /dev/sdX
Replace /dev/sdX with your boot disk.
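Put together, the chroot, fstab, and GRUB steps on the target might look like this; the device names are examples, assuming a single ext4 root on /dev/sda1:
Code:
# from the target's live CD: mount the new root and chroot into it
mount /dev/sda1 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
# inside the chroot: list the filesystem UUIDs and use them in /etc/fstab,
# e.g. UUID=<uuid-from-blkid>  /  ext4  errors=remount-ro  0  1
blkid
grub-install /dev/sda
update-grub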
 
Thanks. I've finally been able to boot my Ubuntu VMs. All I had to do was remove some GRUB options like "console=hvc0" and "quiet splash", and then the VMs were able to start. To make this change persistent, I had to modify /etc/default/grub in the same way and run update-grub.
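For anyone hitting the same hang, the persistent fix is that edit of /etc/default/grub followed by update-grub; the sed below is just one way to do it and assumes the default Ubuntu option lines:
Code:
# drop the Xen serial console and the splash options from the kernel command line
sed -i 's/console=hvc0 *//; s/quiet splash//' /etc/default/grub
update-grub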

I'll definitely keep your solution in mind and want to be able to use it.

Thanks.
 
