Ah, yes! I must admit I haven't done a project with XenServer for a few years now (more than a few, really), so my brain has happily forgotten about the paravirtual VMs there. You are correct, it makes sense that there are problems.
I did a quick Google search and found a few things. This one possibly has a good baseline suggestion, even if the core product is not relevant:
http://updates.virtuozzo.com/doc/pcs/en_us/virtuozzo/6/current/html/Virtuozzo_Users_Guide/33405.htm
Basically, it suggests you need to:
-- make a copy of your VM, so you have a throw-away place to work (well, they didn't suggest that, I do!)
-- boot the VM in its XenServer environment, then remove and replace the kernel with a non-paravirtual version of the guest OS kernel
-- make sure you can still boot up in your source environment
-- then do your migration out (i.e., copy the disk image to Proxmox while the VM is turned off, etc.)
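The steps above might look something like this, as a rough sketch only. I'm assuming a RHEL/CentOS-style guest and XenServer's `xe` CLI; the VM names, IDs, and file names below are placeholders, not tested commands:

```shell
# 1. Take a throw-away copy first (on the XenServer host):
xe vm-snapshot vm=myvm new-name-label=myvm-pre-migration

# 2. Inside the *copy* of the guest: install a normal (non-PV) kernel
#    alongside the Xen one, and make it the default boot entry.
yum install kernel            # the standard kernel package, not kernel-xen
# ...then verify /boot/grub/grub.conf defaults to the new kernel...

# 3. Reboot in place and confirm the guest still comes up cleanly
#    BEFORE copying it anywhere.

# 4. With the VM shut down, export the disk and import it on Proxmox:
xe vm-export vm=myvm filename=myvm.xva
# (extract/convert the disk image from the .xva, then on the Proxmox node:)
qm importdisk 100 myvm-disk.raw local-lvm   # 100 = your new Proxmox VM ID
```

The snapshot in step 1 is the important part: every later step is destructive to the guest, so you want a known-good rollback point.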
I'm guessing that figuring out how to cope with pygrub, if that was a custom piece of the PV VM template, may also be part of your fun.
Other possible scenarios?
- This thread has some similar scenario discussion, albeit with a different endpoint (Hyper-V, ugh), but clearly the problem is the same: moving away from a Xen PV VM to a non-PV VM environment.
https://serverfault.com/questions/754680/converting-linux-machine-from-xenserver-to-hyper-v
I believe the nub of it comes down to the statement, "You need to replace the PV kernel (kernel-xen) in the VM with the pae kernel (kernel-pae)."
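That kernel swap might look like the following inside the guest. This is a hedged sketch assuming a RHEL/CentOS-era distro where the PV kernel is the `kernel-xen` package:

```shell
yum install kernel-pae    # pull in the non-PV PAE kernel
# check that grub's config now defaults to the kernel-pae entry,
# reboot, and confirm you are actually running it (uname -r), THEN:
yum remove kernel-xen     # only after a successful boot on kernel-pae
```

The ordering matters: install and verify the new kernel before removing the old one, so the VM is never left without a bootable kernel.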
Possibly another way to approach this entirely, which may in fact be viable (or may just be a fun exercise in learning and frustration, I am not certain). I know there are ways to migrate OpenVZ <> LXC (this was a requirement, to help make people less unhappy in the world of Proxmox when Proxmox moved away from OpenVZ-based containers and adopted LXC instead). In theory, you can maybe use a similar approach, i.e., a tarball plus a strategically pre-built empty destination as the recipient of your migrated 'stuff'. Conceptually it is more or less on par with this discussion,
https://askubuntu.com/questions/680...ce=google_rich_qa&utm_campaign=google_rich_qa
where they are looking to move a physical machine (albeit an Ubuntu non-VM host) into LXC.
But in practice this is not so different from what you want, i.e.,
- preserve your app stack layer and much of the OS which supports it
- ditch the 'hardware specific' bits of your OS and allow suitable new-environment pieces to be in that role instead.
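As a very rough sketch of that tarball approach, assuming the source VM is reachable over SSH and your Proxmox node has an LXC template matching the guest distro (all paths, IDs, and template names below are placeholders):

```shell
# 1. On the source VM: tar up the filesystem, skipping pseudo-filesystems
#    and other 'hardware specific' runtime state:
tar --numeric-owner -czf /tmp/rootfs.tar.gz \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/tmp  --exclude=/run  /

# 2. On Proxmox: create an empty container from a matching template,
#    which gives you working 'new environment' pieces (init, kernel-facing bits):
pct create 101 local:vztmpl/your-matching-template.tar.xz \
    --rootfs local-lvm:8 --hostname migrated-vm

# 3. Then unpack your app stack and config selectively over the container's
#    rootfs -- deliberately NOT overwriting the kernel/boot/hardware bits
#    the template already provides.
```

The selectivity in step 3 is exactly the "preserve the app layer, ditch the hardware bits" idea: the template supplies the environment-specific pieces, and your tarball supplies everything you actually care about.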
You might in fact want to use LXC instead of a KVM VM as your target on Proxmox, because, in theory, it will provide you with better VM density / resource use / less 'waste' from the virtualization/abstraction layer. Albeit, many people are very happy to accept a few percent of 'so-called waste' for the simplicity of just doing everything in KVM VMs and not fussing around with LXC.
But I have actually started using LXC in Proxmox now, nearly as much as I used to use OpenVZ containers (after some initial reluctance, clearly, my own personal caution), and I must say that I'm perfectly happy with the LXC containers. They "just work", which is exactly what I think is most important.
So, this is a bit rambling, and definitely by no means a definitive "here you go, follow steps 1, 2, 3 and you are done". But I am guessing that maybe, between these two different approaches to the issue, you can find a path that works.
Definitely if you find a path, please post back a summary to the thread so we can all learn from your work!
Thanks,
Tim