Cannot migrate VMs from Xen Server to Proxmox

edzilla

Hi,
I've set up a Proxmox cluster onto which we're planning to migrate all the VMs from our out-of-support XenServers.
I've been using this script to export the VMs and extract the disks directly on Proxmox, and for the most part it works fine.

My issue is that I can only import a single disk from a VM. Any additional disks come without a partition table and the VM does not boot.

I've also tried setting up a FOG server, but I get the same issue, which makes me think it's not a Proxmox issue.

Has anyone had any similar issue moving VMs from Xen to Proxmox?
 
I am curious how, if you successfully imported a boot disk, adding a second disk, even if it's raw, prevents the VM from booting. If you detach the additional disk, does the VM boot?
What is your target storage format? What is Xen's source format? When you import the disk, does it go through a qcow conversion?
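For example, you can check what a disk image actually contains with qemu-img (paths here are placeholders):

    # show the detected format, virtual size and any backing file
    qemu-img info /path/to/disk.raw

    # force an explicit conversion to raw if you are unsure of the source
    qemu-img convert -O raw /path/to/source.img /path/to/disk.raw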


 
That's not exactly what happens: all the VMs I import with a single disk boot and work fine.
If I import a VM with multiple disks, either it boots but the additional disks are not accessible, or it doesn't boot at all.

I believe I've narrowed it down to LVM issues caused by devices changing names (xvda in Xen becomes sda in Proxmox, xvdb becomes sdb, etc.).

I'm trying to see how I can repair that, either in the VM's recovery environment or with libguestfs.
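As a rough sketch of what I have in mind with libguestfs (the disk path is just an example from my setup), inspecting the imported disk offline:

    # open the imported disk read-only and look at the LVM layout
    guestfish --ro -a /var/lib/vz/images/100/vm-100-disk-0.raw <<'EOF'
    run
    list-devices
    vgs
    lvs
    EOF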
 
So if I boot the VM in Proxmox from a live CD, all the LVM volumes are available and I can chroot into the OS.
If I boot from the imported disk, it can't find the LVM volumes and drops into rescue mode.
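Roughly what I'm doing from the live CD (the VG/LV names are from my setup, yours will differ):

    # activate the volume groups, mount and chroot into the installed system
    vgchange -ay
    mount /dev/mapper/ol-root /mnt
    mount /dev/sda1 /mnt/boot        # adjust to your partition layout
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done
    chroot /mnt
    # rebuild the initramfs so it matches the new device names
    dracut -f --regenerate-all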
 
Perhaps the formats between the two systems are not directly compatible. It could be a block-size issue or a problem in the conversion procedure.
You didn't elaborate on the source or destination formats you use, nor on the conversion procedure.
I'd test with a known-good image, i.e. one of the cloud images, perhaps CirrOS. Install it on XenServer, then try to export/import.
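Something along these lines for the test (VM ID, storage name and CirrOS version are just examples):

    # fetch a small known-good cloud image and import it into a test VM
    wget http://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img
    qemu-img convert -O raw cirros-0.6.2-x86_64-disk.img cirros.raw
    qm importdisk 999 cirros.raw local-lvm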
Good luck


 
The VM I'm currently working on is an Oracle Linux 7 machine.
I'm not sure what you mean by format? I export the VM from XenServer as an XVA file, use https://github.com/guestisp/xen-to-pve/tree/master to convert it to a raw file, and then import that raw file with "qm importdisk".
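To spell out the steps (UUIDs/IDs are placeholders; the conversion in the middle is done by the linked script, so I'm not repeating its invocation here):

    # on the XenServer host: export the VM to an XVA archive
    xe vm-export vm=<vm-uuid> filename=myvm.xva

    # ...convert myvm.xva to disk.raw with the xen-to-pve script...

    # on the Proxmox host: attach the raw image to the target VM
    qm importdisk <vmid> disk.raw <storage>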
 
Alright, so you are using a six-year-old shell script that retrieves the entire storage structure of the VM as a tar archive, then unpacks it and writes it out as raw.
It's possible that the metadata wrapping the disks has changed over that period (between XenServer versions) and that what this script does is no longer appropriate.

You have a few options: a) contact the author with a bug report, b) review the forked projects to see if anyone has improved/updated the procedure, or c) break down the steps the script performs and closely examine the resulting tar file, its structure, and the output it produces (see the sketch below).
You may need to make changes yourself if you pursue (c).
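For (c), a starting point could be something like (filename is an example):

    # XVA files are tar archives; list the layout without unpacking
    tar -tvf myvm.xva | head -n 20
    # you should see an ova.xml plus one Ref:N directory per disk,
    # containing numbered chunk files and their checksums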

If this is a business environment and you need help, you may want to engage a Proxmox Partner or simply an experienced Linux sysadmin.

It doesn't look like this is a PVE issue, but rather a problem in the conversion procedure performed by third-party tools.

Good luck


 
Thanks, but it does still seem to be a PVE issue. I've done a lot of testing, and it turns out I was looking in the wrong direction: the culprit is memory hotplug.
If I disable it, the VM boots fine. The CPU type does not seem to make a difference.
The error I get on boot is this:
[screenshot of the boot error]
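For anyone else hitting this, disabling memory hotplug from the CLI looks like this (the VM ID is an example):

    # keep disk/network/usb hotplug but drop memory for VM 100
    qm set 100 --hotplug disk,network,usb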
 
So I have upgraded that VM to Oracle Linux 9, and I can now enable the memory hotplug option without crashing the VM.
It seems there is an incompatibility between Oracle Linux 7 and that option in some cases.
Thank you so much @bbgeek17, I couldn't possibly have done it without your oh-so-helpful comments.
 
Memory hotplug needs a udev rule on older kernels (< 4.7)

https://pve.proxmox.com/wiki/Hotplug_(qemu_disk,nic,cpu,memory)

to bring the pluggable DIMMs online at boot.

(When memory hotplug is enabled, the VM gets a small amount of static memory, e.g. 512 MB, and the rest as virtual DIMMs that are offline by default.)
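If I remember the wiki page correctly, the rule is something like:

    # /lib/udev/rules.d/80-hotplug-cpu-mem.rules
    SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
    SUBSYSTEM=="memory", ACTION=="add", TEST=="state", ATTR{state}=="offline", ATTR{state}="online"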
I added those rules (for CPU and memory) but it still failed to boot.
I don't believe it was an issue of missing memory, as it failed to assemble the LVM volumes.
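For reference, the usual check from the dracut emergency shell is something like:

    # scan for and activate the volume groups by hand
    lvm vgscan
    lvm vgchange -ay
    # if the root LV shows up now, exit to continue booting
    exit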
 
