Migrate VM from Xen to Proxmox

Moez

New Member
Jun 10, 2018
Hi All,

I would like some help migrating Debian/Ubuntu VMs from XenServer to Proxmox.

Does someone have experience with this migration and can you give me the steps to get the VMs running under Proxmox?
 
Linux is really easy. Make a new VM on PVE, start a live disk on both machines (Xen and PVE). Copy everything with rsync and write GRUB. Remove all special Xen drivers... Configure the VM with the optimal virtual hardware. You can also use dd, qm importdisk and Clonezilla.
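
A rough sketch of that rsync approach (assumptions only: a live system is booted inside the new PVE VM, the old VM is reachable over SSH as old-xen-vm, and the target disk is /dev/vda, already partitioned and formatted):
Code:
# copy the whole system over, excluding pseudo filesystems
mount /dev/vda1 /mnt
rsync -aAXH --numeric-ids \
  --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*"} \
  root@old-xen-vm:/ /mnt/
# then bind-mount /dev, /proc and /sys into /mnt, chroot into it and run
# grub-install /dev/vda followed by update-grub to write the bootloader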

What Linux do you have, what kernel?
 
Thanks fireon,

I have Debian 6 Squeeze and another VM: Linux 2.6.32-5-xen-686 #1 SMP Mon Oct 3 09:00:14 UTC 2011 i686 GNU/Linux
Please give me more details about the procedure, because I tried xenmigrate.py without any result.

Thanks again
 
Thanks fireon, I will do it with clonezilla.

@Alessandro 123 After importing the VM, do I need to install GRUB?
Not with Clonezilla; installing GRUB is only needed after a manual copy with rsync. Don't forget to correct fstab... depending on your virtual hardware.
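
For example, an fstab entry that still points at the old Xen disk usually needs to be changed to the new device name (or, more robustly, to a UUID); the lines below are purely illustrative:
Code:
# /etc/fstab before the migration (Xen PV disk):
/dev/xvda1  /  ext4  errors=remount-ro  0  1
# after switching to a virtio disk on PVE:
/dev/vda1   /  ext4  errors=remount-ro  0  1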
 
If your VMs are PV, yes, you have to install GRUB (if not already present) because PV VMs don't have a bootloader (the boot process is handled directly by PyGrub on the Xen host).

Usually all Debian VMs still have a proper bootloader installed and I don't have to install anything (except virtio drivers) when using my script. Just power off the VM, run the script and import the disks in PVE.
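
One way to add a bootloader to such a PV guest after the import (just a sketch, not part of the script mentioned above; the partition name is an assumption) is to boot the PVE VM from a rescue/live ISO and install GRUB from a chroot:
Code:
mount /dev/vda1 /mnt                      # root filesystem of the imported disk
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt grub-install /dev/vda
chroot /mnt update-grub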
 
What does "if your VMs are PV" mean? I've used Citrix XenServer for years and never come across PV other than in an LVM context - and I don't see the relevance of that here.

I'm testing moving from Citrix to Proxmox, and I've gotten as far as booting the VM, but it just loops. I see above "you have to install grub". How do I do that on a VM that won't boot?

Edit: it's a standard CentOS 7 VM, no special boot parameters.


Ok, well somehow I managed to fix it, by luck I guess. I noticed the IDE device was not attached in the GUI. Not sure why it would not do that automatically, seeing there was one, but there we go. It still didn't work though. After some digging I found that the file /etc/pve/local/qemu-server/100.conf contains the configuration details, and in it there was a "[PENDING]" line, and the IDE disk was listed under that. I deleted the line, restarted the VM and voilà, it worked.
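
For reference, such a pending change looks roughly like this in the VM config (the values below are made up for illustration); entries under the [PENDING] marker are configuration changes that have not been applied to the VM yet, which is why the disk showed up as unattached:
Code:
boot: cdn
bootdisk: ide0
cores: 2
memory: 2048

[PENDING]
ide0: local-lvm:vm-100-disk-0,size=32G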

Now, I like the idea that Proxmox is "hackable" like this. It's possible to dig in and find files that allow you to fix this. In XenServer it is a nightmare, with everything hidden behind long identifiers that nobody can read or remember. But having said that, I am a little frustrated that information here seems to be scattered high and low. It would be nice to have a clear and concise migration guide explaining all the steps, not just "just use whatever!" or "just install grub" without much explanation. It looks like the kind of documentation people write on projects just before going on holiday, i.e. "yeah, I have to write it, but this will do". Yes, once I understand what I'm doing here, I will do my best to provide some better guides.
 
Here's my way of migrating Xen VMs to PVE hosts; I've done a few hundred so far and have had very few issues.

Download and prepare xen-to-pve script
Code:
apt-get install git
cd /tmp
git clone https://github.com/guestisp/xen-to-pve.git
chmod +x /tmp/xen-to-pve/xva-conv.sh

Make sure you have http access to the xen host
Code:
curl http://192.168.178.125

Shut down VM on Xen host.

Create VM, copy, convert and import disk from Xen host to PVE host.
Code:
qm create 345 --name *vm-name* --acpi 1 --agent 0 --bios seabios --boot cdn --bootdisk virtio0 --onboot yes --cores 2 --memory 2048 --net0 virtio=*MAC-address*,tag=*vlantag*,bridge=*network bridge* && cd /tmp/xen-to-pve && wget --http-user=*Xen-User* --http-password=*Xen-Pass* http://*Xen.hostname.nl*/export?uuid=*Xen-VM-UUID* --limit-rate=*speedinmb*m -O - | tar --to-command=./xva-conv.sh -xf - && mv Ref* *PVE-VM-ID*.raw && qm importdisk *PVE-VM-ID* *PVE-VM-ID*.raw local-lvm && rm *PVE-VM-ID*.raw -f && qm set *PVE-VM-ID* --virtio0 local-lvm:vm-*PVE-VM-ID*-disk-0

You might want to dig a bit deeper into my oneliner since it's kind of a lot to process.
This oneliner creates a VM on PVE with the desired specs but without a disk, then downloads the XVA file, converts it to .raw and imports it into the VM.
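
For readability, here is roughly the same flow split into separate steps; all values (the VM ID 345, MAC address, VLAN tag, bridge, Xen credentials/hostname and the local-lvm storage) are placeholders you would adapt:
Code:
# 1. create the target VM without a disk
qm create 345 --name myvm --acpi 1 --agent 0 --bios seabios \
  --boot cdn --bootdisk virtio0 --onboot 1 --cores 2 --memory 2048 \
  --net0 virtio=AA:BB:CC:DD:EE:FF,tag=10,bridge=vmbr0

# 2. stream the XVA export from the Xen host and convert it to a raw image
cd /tmp/xen-to-pve
wget --http-user=XENUSER --http-password=XENPASS \
  "http://xen.example.com/export?uuid=XEN-VM-UUID" -O - \
  | tar --to-command=./xva-conv.sh -xf -
mv Ref* 345.raw

# 3. import the raw image into storage and attach it to the VM
qm importdisk 345 345.raw local-lvm
rm -f 345.raw
qm set 345 --virtio0 local-lvm:vm-345-disk-0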

Just make sure you have enough space on the drive you're downloading the vm to.

The only issue I've had so far is VMs not booting due to GRUB settings.
Some had their boot disk set to xvda, which becomes vda or sda depending on whether you choose virtio or scsi.
Other issues happen because of the "console=hvc0" option.

Either way, the VMs always booted into GRUB. I changed the settings by editing them at boot and made the changes persistent afterwards.
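
As an illustration only (device names and paths are assumptions): at the GRUB menu you can press "e", change root=/dev/xvda1 to the new device and remove console=hvc0 from the linux line, then boot with Ctrl-x. Once the VM is up, something along these lines makes the device rename permanent:
Code:
sed -i 's|/dev/xvda|/dev/vda|g' /etc/fstab /etc/default/grub
update-grub     # on CentOS 7: grub2-mkconfig -o /boot/grub2/grub.cfg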
 
After many hours of trial and error yesterday, I found the easiest way for me was the following:

1) Obtain the UUID of the VM to be transferred by running xe vm-list on the XenServer host
2) Shut down the source VM
3) Run this line on Proxmox (from the xen-to-pve directory), setting $PASSWORD, $HOST and $UUID as appropriate
wget --no-check-certificate --http-user=root --http-password=$PASSWORD https://$HOST/export?uuid=$UUID -O - | tar --to-command=./xva-conv.sh -xf -
4) Run this line on Proxmox, setting $ID to the next free ID
qm create $ID
5) Run this line on Proxmox, using the $ID from (4) and setting $VMFILE to the Ref* file produced in step (3). This of course assumes your local storage is LVM
qm importdisk $ID $VMFILE local-lvm

I made a small quick and dirty wrapper for this:

#!/bin/bash
# usage: $0 <xen-vm-uuid> <new-pve-vm-id>  (run from the xen-to-pve directory)
rm -f Ref*
wget --no-check-certificate --http-user=root --http-password=PASSWORD "https://192.168.1.4/export?uuid=$1" -O - | tar --to-command=./xva-conv.sh -xf -
mv -f Ref* raw
qm create $2
qm importdisk $2 raw local-lvm

After this is run, I needed to go to the PVE GUI and adjust CPU, memory, network and disk for the new VM. The CPU and memory should be straightforward. I found that if I added the network before the first boot, it would just try to boot from the network, but if I let it boot first, I could add the network and it would be OK. Not sure why this is so. The disk should be at the bottom of the hardware list as unattached. Just attach it and go.
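
The same adjustments can also be done from the CLI instead of the GUI, for example (the VM ID 101 and the disk name just follow the default importdisk naming and are placeholders):
Code:
qm set 101 --cores 2 --memory 2048 --net0 virtio,bridge=vmbr0
qm set 101 --virtio0 local-lvm:vm-101-disk-0      # attach the imported, unattached disk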

I was able to migrate 3 CentOS 7 VMs with this, using the VirtIO network adapter in bridge mode. It appeared as eth0 in the VMs and worked out of the box. A Debian 9 VM did not do so well. Whichever network device type I used, the interface came up with a different name, not eth0 as on XenServer. I got around that by editing /etc/default/grub, modifying the GRUB_CMDLINE_LINUX variable as follows:
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
then "grub-mkconfig -o /boot/grub/grub.cfg" and a reboot and everything was working again.


Now for the next challenge: migrating a Windows VM...

... not surprisingly, that completely failed. "Preparing Automatic Repair" and then nothing. Fortunately I only have one Windows VM, and it can easily be rebuilt from scratch.
 
I found that if I added the network before the first boot, it would just try to boot from the network, but if I let it boot first, I could add the network and it would be OK.

Ok, that wasn't true actually. When I boot it still tries to boot from the network in an endless loop. I have to hit esc, then pick the disk to make it boot from the disk. Isn't there a way to tell Proxmox to do this automatically somewhere?
 
Isn't there a way to tell Proxmox to do this automatically somewhere?

Whilst creating the VM I set the boot order by using the following options:
Code:
--boot cdn --bootdisk virtio0
 
Ok, so what do you do when the VM is already created?

Ah, never mind, found it under options -> boot order.

PVE is pretty cool ;)
 
Ok, so what do you do when the VM is already created?
Great you found it in the GUI, here's the command-line version as well:
Code:
qm set 100 --boot cdn --bootdisk scsi0
c: cd
d: disk
n: network
bootdisk is equal to the disk type and ID listed in the config.
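
If you are not sure which disk name your VM actually uses, the current config shows it, e.g.:
Code:
qm config 100 | grep -E '^(ide|sata|scsi|virtio)[0-9]'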
 
What does "if your VMs are PV" mean? I've used Citrix XenServer for years and never come across PV other than in an LVM context - and I don't see the relevance of that here.

Having used XenServer for that many years, you should be aware of the different possible virtualization modes for VMs.
This could shed some light on it and help you understand why the question was asked: https://xen-orchestra.com/blog/xen-virtualization-modes/
 
Having used XenServer for that many years, you should be aware of the different possible virtualization modes for VMs.
This could shed some light on it and help you understand why the question was asked: https://xen-orchestra.com/blog/xen-virtualization-modes/

Yeah, that was a brainfart. Didn't resolve PV to paravirtualized. I guess it happens when you get to a certain age.

How is Lausanne? I understand the cost of living there is one of the highest in the world, only beaten by a few other Swiss cities? I understand how that can put someone in a bad mood.
 
Yeah, that was a brainfart. Didn't resolve PV to paravirtualized. I guess it happens when you get to a certain age.

How is Lausanne? I understand the cost of living there is one of the highest in the world, only beaten by a few other Swiss cities? I understand how that can put someone in a bad mood.

Hahaha, yes that must be it. Yes I think Zurich or Geneva is even worse.
 
After having tried the migration of Debian VMs from XenServer 6.5 to Proxmox in many different ways, I always ended up with VMs stuck on the message "Booting from Hard Disk ...".

Finally I found the reason for this and maybe it helps someone else:
In all my VMs on XenServer there are "console=hvc0" entries in /boot/grub/grub.cfg (as mbosma already mentioned earlier). If I remove these entries with "sed -i -e s/console=hvc0//g /boot/grub/grub.cfg", either on the running VM in XenServer or after the migration in Debian rescue mode, the migrated VM boots fine ... until I do an "apt upgrade". This is because, as part of the upgrade process, "update-grub" is also run, and it takes the GRUB configuration from the /etc/default/grub file. So I also had to remove the "console=hvc0" entry in that file.

So, if you have a migrated VM hanging on "Booting from Hard Disk", you have to get into the boot disk of the VM (e.g. via Debian rescue mode), run
sed -i -e s/console=hvc0//g /boot/grub/grub.cfg
and remove the line with console=hvc0 from the /etc/default/grub file.

It is surely easier to do this on the VM before migration.
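
For example, something like this on the Xen guest before the export should remove the setting permanently (assuming a standard Debian GRUB 2 setup; back up the file first):
Code:
sed -i 's/console=hvc0//g' /etc/default/grub
update-grub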

By the way: for me the most comfortable way to migrate Linux VMs is with Clonezilla; it automatically takes care of renaming the xvda devices to sda in GRUB.
 
