How to migrate from Xen Project?

tomx1

Active Member
Apr 26, 2018
We are running a few machines on Xen Project 4.8; all of them are running Debian (versions 7, 8, and 9).

I'm looking for a way to migrate the existing Xen Project machines to Proxmox. I was only able to find a (quite old) migration guide from XenServer which is not suitable for our use case: https://pve.proxmox.com/wiki/Xenmigrate

Does anyone have a hint on how to get our Xen Project machines running under Proxmox?
 
Hi, there are various documents on 'migrating to Proxmox', and conceptually it is all the same as 'migration from a physical or alternate VM platform to Proxmox'.

- if it is a Windows VM guest, make sure to install IDE registry support before doing the migration
- it may be beneficial to remove source-hypervisor-specific pieces before starting, if possible (i.e., I've seen cases where a VM guest running on VMware has "VMware Tools" installed, and it is hard to uninstall VMware Tools once the VM is flipped to Proxmox, so it may be cleaner to scrub VMware Tools, or the Xen guest tools in your case, prior to moving it over to Proxmox)
- get the blocks / VM disk over to the Proxmox host
- create a new 'placeholder' VM in Proxmox which becomes the new VM
- hide its existing bare disk 'under the hood' (which is blank), and swap in / rename your 'real disk image blocks' file in its place (see the sketch below)
- spark it up and let the happy days of Proxmox operation begin.
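As a minimal sketch of the last three steps, assuming the disk has already been copied over as a raw image (the VM ID 100, the path /tmp/source-vm.raw, and the storage name local-lvm are made-up examples; qm importdisk needs a reasonably recent Proxmox VE):

Code:
# create an empty 'placeholder' VM (ID, name and sizes are examples)
qm create 100 --name migrated-guest --memory 2048 --net0 virtio,bridge=vmbr0

# import the copied raw image onto the 'local-lvm' storage;
# it gets attached to VM 100 as an 'unused' disk
qm importdisk 100 /tmp/source-vm.raw local-lvm

# attach the imported disk and make it the boot disk
# (the exact volume name may differ; check 'qm config 100' after the import)
qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot c --bootdisk scsi0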

I'm guessing it does require a bit of SSH / under-the-hood work, but none of it is hard; i.e., I don't think you can do this purely via the Proxmox GUI. Unless you want to use a method I have used:

-- create a new VM in Proxmox to meet the requirements
-- spin up the VM and boot it with a Clonezilla live CD
-- do the same in your Xen environment: boot up your <Source VM> using the <Clonezilla LiveCD>, then do a Clonezilla-based dump/clone, i.e., dump the Clonezilla image onto the Proxmox host itself, into /tank/ for example, via SSH-based Clonezilla access. Let the blocks pour across.
-- once all done, do a Clonezilla restore in the new VM and - all good.

This of course requires non-trivial downtime, since you are pushing blocks across the NIC/wire, but it works.
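For reference, a minimal sketch of that SSH handoff, reusing the /tank/ directory from above (the directory name and menu labels are from memory, so double-check them against the actual Clonezilla menus):

Code:
# on the Proxmox host: prepare a directory to receive the image set
mkdir -p /tank/clonezilla-images

# in the Clonezilla live environment (on the source VM, and again on the
# new Proxmox VM): choose device-image -> ssh_server, point it at the
# Proxmox host (user root, directory /tank/clonezilla-images), then run
# 'savedisk' on the source and 'restoredisk' on the target; Clonezilla
# mounts the repository over SSH and streams the blocks across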


Tim
 
Hi Tim, thanks for your reply.

Well, it looks like it's not that easy (for us). Right now we are using paravirtualized guests which are booted via pygrub (with LVM as storage), so none of them uses a bootloader in the "classical" way. The thing is, I don't know how to create an MBR / bootloader for a migrated PV guest on Proxmox.

I have already tried to dump the MBR of an existing Proxmox guest with a running Debian installation, without success.

Unfortunately I have not found any documentation, blog articles, etc. on how to migrate a PV guest to Proxmox.
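For what it's worth, getting the raw blocks over to Proxmox is the easy part, since the guests live on LVM; a sketch, with made-up VG/LV and host names, run with the guest shut down:

Code:
# on the Xen host, with the PV guest shut down (names are examples)
lvs                                    # find the guest's logical volume
dd if=/dev/vg0/guest-disk bs=4M \
  | ssh root@proxmox-host 'dd of=/tmp/source-vm.raw bs=4M'

The catch is that such a pygrub-era volume may contain either a partitioned disk or a bare filesystem; in the bare-filesystem case there is no room for an MBR at all, which is exactly the bootloader problem above.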
 
Ah, yes! I must admit I haven't done a project with XenServer for a few years now (more than a few, really), so my brain has happily forgotten about the paravirtual VMs there. You are correct; it makes sense that there are problems.

I did a quick Google search and found a few things; this one possibly has a good baseline suggestion even if the core is not relevant:
http://updates.virtuozzo.com/doc/pcs/en_us/virtuozzo/6/current/html/Virtuozzo_Users_Guide/33405.htm

basically it suggests you need to
-- make a copy of your VM, so you have a throw-away place to work (well, they didn't suggest that, I do!)
-- boot the VM in its Xen environment, and remove and replace the kernel with a 'non-paravirtual' version of the guest OS kernel (see the Debian sketch after this list)
-- make sure you can still boot up in your source environment
-- then do your migration out (i.e., copy the disk image to Proxmox while the VM is turned off, etc.)
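For the Debian guests in question, the kernel-replacement step might look something like this (a sketch; the package names are the standard Debian metapackages, so verify them against your exact release):

Code:
# inside the running Xen PV guest (Debian, amd64 assumed)
apt-get update
apt-get install linux-image-amd64 grub-pc   # stock kernel + GRUB
# note: Debian's stock kernels can boot both PV and HVM/KVM, so this
# normally does not break booting back in the source environment;
# grub-install onto the disk only works if the volume has a real
# partition table (see the last reply in this thread)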

I'm guessing that figuring out how to cope with pygrub, if that was a 'custom piece of the PV VM template', may also be part of your fun.

Other possible scenarios?

- This thread has some similar scenario discussion, albeit with a different endpoint (Hyper-V, ugh), but clearly the problem is the same: moving away from a Xen PV VM to a non-PV VM environment.
https://serverfault.com/questions/754680/converting-linux-machine-from-xenserver-to-hyper-v

I believe the nub of it comes down to the statement, "You need to replace the PV kernel (kernel-xen) in the VM with the pae kernel (kernel-pae)."
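(That statement is about an EL-family guest, so there it would be roughly the following; the package names are from memory, and none of this applies to Debian:)

Code:
# on a RHEL/CentOS 5-era 32-bit guest (per the serverfault thread)
yum install kernel-PAE
yum remove kernel-xen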


Possibly another way to approach this entirely, which may in fact be viable (or possibly just a fun exercise in learning and frustration, I am not certain): I know there are ways to migrate OpenVZ <> LXC (this was a requirement to help make people less unhappy in the world of Proxmox when Proxmox moved away from OpenVZ-based 'containers' and adopted LXC instead), and in theory you can maybe use a similar approach, i.e., a tarball plus a strategically pre-built 'empty destination' as the recipient of your migrated 'stuff'. Conceptually it is more or less on par with this discussion,

https://askubuntu.com/questions/680...ce=google_rich_qa&utm_campaign=google_rich_qa

where they are looking to move a physical host (albeit an Ubuntu non-VM host) into LXC.

But in practice this is not so different from what you want, i.e.,

- preserve your app stack layer and much of the OS which supports it
- ditch the 'hardware-specific' bits of your OS and allow suitable new-environment pieces to take that role instead (a sketch of the tarball route follows below)
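A sketch of that tarball route, with made-up IDs, paths, and sizes (pct create can build a container directly from a root-filesystem tarball):

Code:
# on the source guest: tar up the root filesystem, skipping
# pseudo-filesystems (paths and the container ID are examples)
tar -czpf /tmp/guest-rootfs.tar.gz -C / \
    --exclude=./proc --exclude=./sys --exclude=./dev --exclude=./tmp .

# on the Proxmox host: create an LXC container straight from that tarball
pct create 200 /tmp/guest-rootfs.tar.gz \
    --hostname migrated-guest --memory 1024 \
    --rootfs local-lvm:8 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp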

You might in fact want to use LXC instead of a KVM VM as your target on Proxmox because, in theory, it will provide you with better VM density / resource use, i.e., less 'waste' on the virtualization / abstraction layer. Albeit, many people are very happy to accept a few percent of 'so-called waste' for the simplicity of just doing everything in KVM VMs and not fussing around with LXC.

But I have actually started using LXC in Proxmox now, nearly as much as I used to use OpenVZ containers (after some initial reluctance, clearly - my own personal 'caution'), and I must say that I'm perfectly happy with the LXC containers. "They just work", which is exactly what I think is very important :)

So, this is a bit rambling, and definitely by no means a definitive "here you go, follow steps 1, 2, 3 and you are done". But I am guessing that maybe, between these two different approaches to the issue, you can find a path that works.

Definitely if you find a path, please post back a summary to the thread so we can all learn from your work! :)

Thanks,

Tim
 
If your source server has a proper disk layout and an MBR partition table -- a /dev/sda + /dev/sda1 layout -- you can install a kernel and GRUB inside the source server before migration.
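And if the migration has already happened (or the bootloader step was skipped), GRUB can also be installed afterwards from a rescue or live system booted inside the new Proxmox VM; a sketch, assuming the migrated disk shows up there as /dev/sda with the root filesystem on /dev/sda1:

Code:
# inside a Debian live/rescue system booted in the new Proxmox VM
mount /dev/sda1 /mnt
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
chroot /mnt                                  # now working inside the guest
apt-get install grub-pc linux-image-amd64    # if not done before migration
grub-install /dev/sda
update-grub
exit

If the PV image turns out to be a bare filesystem with no partition table at all, its contents first have to be copied into a partition on a fresh disk; otherwise there is nowhere for the MBR and GRUB to live.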
 
