XenServer to Proxmox

Alessandro 123

Well-Known Member
May 22, 2016
This has already been discussed tons of times, but all the threads are very old.

I have to convert about 150 XenServer VMs (all PV) to Proxmox. Considering the number of VMs I have to migrate, I need to automate as much as possible.

I know that I have to install a kernel and GRUB in each VM before migrating it, but how does XenServer react to this? If the migration fails, I'll need to power the VM back on on the XenServer host. Would the newly installed kernel and bootloader cause any issue there, or are they simply ignored by Xen?

In addition to xenmigrate.py, do you know of any other procedure or script that automates the migration? Like exporting, converting, creating the new VM on Proxmox and so on...

Should I write something similar on my own? I don't want to reinvent the wheel

In case I have to write something myself, could you please point me to the proper PVE API for creating new VMs from existing disks? (The ones converted via xenmigrate.)

Xenmigrate will create raw images. Since I would like to use qcow2, I would need an additional conversion step from raw to qcow2, which would increase downtime a lot (some VMs are about 300 GB).
Can I create the VM using the raw disks and then convert them to qcow2 on the fly, while the VM is online? If yes, how?
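Just to be clear, the additional conversion step I mean is something like this (file names are placeholders):

qemu-img convert -p -f raw -O qcow2 vm-disk.raw vm-disk.qcow2

On a 300 GB image that alone takes quite a while.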

Storage for some VMs (not all) will be ZFS; is there anything I should take care of when converting those VMs? I probably won't use qcow2 there, to avoid a COW-on-COW filesystem.
 
Hello,

You can easily install a kernel and GRUB in a Xen PV guest; it will not cause any issue in Xen. However, if you install the kernel before the migration, don't forget to enable (force) the virtio modules (or whatever other devices you have configured) in the initrd.
 
I'm using Debian.
Will these modules break anything in PV under Xen?

Can I add them after the migration, or won't the migrated machine boot without these modules?
 
No, it doesn't break anything. If a module is not needed, the kernel simply doesn't load it. You do need to regenerate the initramfs with these modules before the migration, though; without that, Linux basically won't have drivers for the virtio devices.
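On a Debian guest that would look roughly like this (the module list is just an example, adjust it to the devices you plan to configure on the Proxmox side):

# make initramfs-tools always include the virtio modules
echo -e "virtio\nvirtio_pci\nvirtio_blk\nvirtio_scsi\nvirtio_net" >> /etc/initramfs-tools/modules
# rebuild the initramfs for all installed kernels
update-initramfs -u -k all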
 
So, no more suggestions?
What is the best procedure to migrate a running XenServer VM (I can power it off for a while) to a Proxmox node with ZFS?
 
If you manage to get a disk image from Xen, remember you can use
qm importdisk (see qm help importdisk)

If you have an OVF you can use
qm importovf (see qm help importovf)

For Xen HVM domUs the disk image method should work.
For Xen PV domUs, YMMV; if you only have a partition, you need to put it on a bootable disk image first.
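For example (VM IDs, file names and storage name are placeholders):

# import a single disk image into an existing VM
qm importdisk 100 /root/xen-export.raw local-zfs
# or create a whole VM from an OVF package
qm importovf 101 /root/xen-export.ovf local-zfs

After importdisk the disk shows up as an unused disk in the VM config; attach it to a controller and set it as the boot disk afterwards.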
 
I'm able to export a XenServer VM, creating a huge XVA file.
I only have PV guests. I can add GRUB and a Linux kernel, then export the VM as an XVA, but my biggest concern is the total time:

some hours are needed to export the VM as an XVA (due to a very stupid and nonsensical rate limit imposed by XenServer < 7.1), some time is needed to extract the XVA, some hours are needed to convert the single "chunks" into one huge raw image, and some more time is needed to upload the whole raw image to the Proxmox host.

And after that, how can I get the raw image onto the ZFS pool? I would like to keep downtime as short as possible. Is this the only way?

Anyway, which image formats are supported by "qm importdisk"? I didn't find any reference in the man page.
 
You can use "qm importdisk", and/or the usual move disk functionality if the disk is already referenced in the VM config.
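For example, to import a raw image onto a ZFS-backed storage, and to convert an already-attached disk online (IDs and storage names are placeholders):

# import the raw image onto the ZFS storage (it becomes a zvol, no qcow2 needed there)
qm importdisk 100 /root/xen-export.raw local-zfs
# online move of an attached disk to a file-based storage, converting to qcow2 on the fly
qm move_disk 100 scsi0 local --format qcow2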
 
OK, so I create the new VM with a temporary disk (let's say 1 GB), then I remove the whole storage configuration from the VM, and after that I run "qm importdisk" specifying the VM ID?
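Or maybe even simpler, without the temporary disk at all, roughly something like this (VM ID, file and storage names are just placeholders):

qm create 9001 --name migrated-vm --memory 4096 --net0 virtio,bridge=vmbr0
qm importdisk 9001 /root/xen-export.raw local-zfs
# the imported disk appears as an unused disk, attach it and make it the boot disk
qm set 9001 --scsi0 local-zfs:vm-9001-disk-0 --bootdisk scsi0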
 
Since I had to do this just this weekend, here is some background info and a few additional hints from me which will hopefully help others:

I did not realise until very recently that Citrix has cut the feature set of their XenServer Free Edition with version 7.3 (see their post "XENSERVER 7.3: CHANGES TO THE FREE EDITION" on xenserver.org). This, and the deepening dependency on Citrix itself (you need a Citrix ID to download patches etc.), made me reconsider my choice of (free) hypervisor. Coming from Linux and especially Debian, I chose Proxmox (otherwise oVirt might have been an alternative). This is good for my use cases (mostly some single servers for small production or lab use). What is more, it brought some features out of the box that I had to script manually for XenServer (autostart VMs, backups).

But then I was confronted with the migration. At first I thought an export to XVA would be the way to go. But this works better and faster (at least for me):

Just use the VHD files containing the XenServer VM hard disks directly! You find them on XenServer under /var/run/sr-mount/<SR-UUID>/<DISK-UUID>.vhd. After creating a new VM on Proxmox via the GUI or otherwise, you can import these directly on the command line, even to your thin-provisioned LVM datastore on Proxmox: "qm importdisk <id_of_your_vm> /your/storage/<DISK-UUID>.vhd local-lvm"!
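For example (host name, UUIDs and storage name are placeholders; one simple way to get the file onto the Proxmox node is plain scp):

# copy the VHD from the XenServer host to the Proxmox node
scp root@xenserver:/var/run/sr-mount/<SR-UUID>/<DISK-UUID>.vhd /root/
# import it into the freshly created VM on the thin-provisioned LVM storage
qm importdisk 100 /root/<DISK-UUID>.vhd local-lvm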

This is just awesome - thanks a lot for the great work! :) I will consider buying support from you whether I need it or not.
 
Hi Alessandro, we've used XenServer for at least 10 years (since before XenSource) and are going to move to Proxmox.
Our VMs are on EqualLogic storage using iSCSI.
Did you manage to export/import the XenServer VMs?
I've got the same situation: big VMs with very little downtime, etc.
 
Hi,

We were in the same situation a few months ago. After some testing, we decided to do the migration at the application level instead of migrating the VMs.

So, instead of migrating the whole VMs, we installed new CentOS 7 VMs (the old infrastructure on Xen was CentOS 6) and redeployed everything on those.
 
I'm in the same boat - moving from XenServer to Proxmox. I've migrated my XenServer VMs to NFS (which Proxmox can also see). However, when I try to do the qm import of the XenServer VHD files, Proxmox won't boot them. I've tried multiple Windows and Linux VHDs. Has anyone run into this and found the solution?

One other thing, not to go off-topic, but who should I email to make a submission to the Wiki? I got MPIO iSCSI working last night with my Dell EqualLogic SAN, but the instructions in the Wiki were incomplete for my situation (the EQL does not support multiple subnets, which I think is part of the problem). The solution was to use iscsiadm to create an interface for each iSCSI Ethernet port so it binds them (then ifconfig shows an even balance of traffic). Anyway, I'll be setting up another Proxmox node tonight, so I'll document it from start to finish for the Wiki if the Proxmox team is interested.
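In case it helps anyone before that lands in the Wiki, the core of it was roughly this (interface names and portal IP are just examples, adjust to your setup):

# create an iSCSI interface definition for each Ethernet port and bind it
iscsiadm -m iface -I iface-eth2 --op=new
iscsiadm -m iface -I iface-eth2 --op=update -n iface.net_ifacename -v eth2
iscsiadm -m iface -I iface-eth3 --op=new
iscsiadm -m iface -I iface-eth3 --op=update -n iface.net_ifacename -v eth3
# discover the EqualLogic portal through both interfaces, then log in
iscsiadm -m discovery -t st -p 10.10.10.10 -I iface-eth2 -I iface-eth3
iscsiadm -m node --login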
 
