Migration of servers to Proxmox VE

Hi,

Is it possible to migrate a Virtuozzo container to OpenVZ/Proxmox VE?
 
It should be easy to copy the root filesystem (tar). The config should also work for venet, but I'm not sure about that.
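
Roughly something like this could work (the container ID 101 and the paths are only examples, and the old Virtuozzo config will likely need manual adjustment for Proxmox VE):

# on the Virtuozzo host, with the container stopped
vzctl stop 101
tar -czf /tmp/101.tar.gz -C /vz/private/101 .

# on the Proxmox VE host, after copying the archive over
mkdir -p /var/lib/vz/private/101
tar -xzf /tmp/101.tar.gz -C /var/lib/vz/private/101
# create /etc/vz/conf/101.conf based on the old config (adjust paths and venet IPs), then
vzctl start 101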

- Dietmar
 
I've been having problems with this method and described them here.

Sorry for the double post, I saw this after I started the new thread.
 
Due to the severe problems that arose with HyperVM, I would like to migrate all HyperVM VPSes to my Proxmox servers. How can this be done? Both are based on OpenVZ.
 
OK, for those people moving either dedicated or fully virtualized servers to Proxmox: there is already some great info on this forum and in the wiki. I want to add two useful items (Windows-specific).

1. You moved your server and everything works, but when you go to add IPs to your network config in Windows, it complains that the IP is already assigned to a network adapter, except that the network adapter is no longer present. This is one way to get around that.


  1. Click Start, click Run, type cmd.exe, and then press ENTER.
  2. Type set devmgr_show_nonpresent_devices=1, and then press ENTER.
  3. Type Start DEVMGMT.MSC, and then press ENTER.
  4. Click View, and then click Show Hidden Devices.
  5. Expand the Network Adapters tree.
  6. Right-click the dimmed network adapter, and then click Uninstall.
This allows you to see non-present devices and remove the old network card and its config cleanly.

2. You need to move many, many IPs from the old network card/server to the new one. You could do it by hand or even via the command line, but it takes ages. Or you could do this.

Use the following command to backup your network configuration:
netsh interface dump > netcfg.dat

Use the following command to restore your network configuration:
netsh exec netcfg.dat

Takes a bit of time to do the restore, but it's much, much faster than doing it by hand.
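
One caveat I'm not completely sure about: the dump references adapters by their interface name, so if the new adapter came up with a different name (e.g. "Local Area Connection 2"), you may need to rename it to match the old one before running the restore. The names here are only examples:

netsh interface set interface name="Local Area Connection 2" newname="Local Area Connection"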

-ndew
Rackster.com
 
Great Proxmox VE!

I'm new to Proxmox VE. In a recent migration of physical servers (mainly Windows), I ran into some trouble with SelfImage and qemu-nbd, as you guys mentioned in the wiki.

Here is what I tried:

1. Install SelfImage on the physical Windows machine.
2. Execute mergeide.reg.
3. Create a new KVM virtual machine with a suitable disk size.
4. SSH to the Proxmox VE host and run qemu-nbd -t /var/lib/vz/images/xxx/vm-xxx-disk.qcow2 (see the sketch after this list).
5. Start SelfImage on the physical machine and choose to image the entire hard disk. As the output file, select NBD with the PVE host IP and port 1024 as parameters. Click Start.
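
For reference, step 4 on the host looked roughly like this (xxx stands for the VM ID; -p and -b just make the default port and bind address explicit):

qemu-nbd -t -p 1024 -b 0.0.0.0 /var/lib/vz/images/xxx/vm-xxx-disk.qcow2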

That's where my trouble started, due to the large disk size of the physical server. I set up a Proxmox VE cluster with 290 GB of disk space on the master and on the node. When I was trying to image the Windows system, I found out the physical disk size of the Windows machine was about 274 GB.

The switches at my datacenter were a mix of 100 Mbit and 1 Gbit models, so imaging was quite slow, near 9.55 MB/s. As a result, it would have run overnight if I had used SelfImage without compression, so I chose the gzip option (with gzip, SelfImage worked much faster than before).

When imaging was complete, I pressed CTRL+C on the PVE console and started the virtual machine in the web interface, but only got "Boot failed: not a bootable disk".

I tried different options, such as gzip (best) and bzip2, with no success. I also tried to gunzip the image file created through qemu-nbd, but got an error.

At last, I ran SelfImage with gzip to a file on a Samba server instead of the NBD server. Once that was done, I successfully gunzipped the image file and got the virtual machine to start!
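
My guess is that with compression enabled SelfImage sends a gzipped stream, so the disk served by qemu-nbd ends up containing compressed data rather than a raw disk image, hence "not a bootable disk". The working approach was roughly this (a sketch; the Samba mount point and image name are only examples, and xxx stands for the VM ID):

# on the PVE host, with the Samba share mounted
gunzip -c /mnt/samba/physical-server.img.gz > /tmp/physical-server.img
qemu-img convert -f raw -O qcow2 /tmp/physical-server.img /var/lib/vz/images/xxx/vm-xxx-disk.qcow2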

Maybe it's better to put this in the wiki to save others some time. I experimented for three days because of the large disk size.
 
Hi,

I've got an existing OpenVZ installation on Debian Lenny, running fine. However, I would like to have Proxmox as my control panel for everything. Is it possible to just install Proxmox without it deleting or destroying anything? Or is an easy migration somehow possible?

Thanks.
 
So this won't overwrite any existing configurations or installations of my server?
 
If you install the packages, they will be installed as defined. If you are unsure, please test it in a test environment if you can manage it.
 
Thanks for your reply.

My only point is that it shouldn't delete my existing containers or change my IP configuration. That's all :)
 
Works wonderfully. I just installed Proxmox on my existing installation and I can control my OpenVZ containers now.

Thanks for your help
 
Great work, keep posting
Cheers.
 
Migration from Debian Lenny or Etch Xensource VM (LVM) to KVM (LVM)

Here is some quick info on how to migrate a Debian Lenny or Etch XenSource 3.0.1 VM stored on LVM to KVM.

NB : Maybe some of you won't find that "elegant"...but it works ;)

NB: on my Xen version I was unable to run amd64 Linux versions, so I had to install an x86 VM under KVM.

In my case, the main problem is that the kernel used by the VM (the DomU) is located on the host (the Dom0), and this kernel is a Xen one, 2.6.18.4-xen for example.
I don't want this kernel under KVM (I'm not even sure it would work).
So we'll install a Debian machine under PVE and keep its /boot and /lib/modules/<kernelversion> directories as well as /etc/fstab.
Once the machine is installed, we can:
* clear all the partitions but /boot,
* restore the backup from XenSource,
* copy /etc/fstab and /lib/modules/<kernelversion>,
* optionally modify the VM config to keep the same MAC address.

I've done this successfully for Etch and Lenny Xen DomU migrations.
For Etch, I installed from a Lenny businesscard image; it doesn't matter, since we only keep the fstab and the /lib/modules/2.6.22 dir.

I did it this way:

* Install a Debian machine under Proxmox VE with the same total disk space (or less if you want... just have enough GB to store all your data):
  1. Create the same LVM layout as you had under Xen (if you want...).
  2. Install no packages at all.
  3. Stop the machine when the installation is done and the VM is rebooting.


* Back up the data:
  1. On the Dom0: stop the DomU.
  2. Make a tar.gz of every LV used by the Debian DomU and send it to a location reachable from the Proxmox VE server. Remember that you don't need to back up /boot, since we'll use the one we've installed (a rough sketch of these commands follows this list).
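
A sketch of the backup step, assuming the DomU's root filesystem is in a single LV called /dev/vg0/domu-root (all names are examples; repeat for every LV the DomU uses):

# on the Dom0, with the DomU stopped
mount /dev/vg0/domu-root /mnt
tar -czf /tmp/domu-root.tar.gz -C /mnt .
umount /mnt
scp /tmp/domu-root.tar.gz root@pve-host:/tmp/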

* Restore the data:
  1. On the PVE server: mount the VM disk(s) in a directory and back up /etc/fstab and /lib/modules/<yourkernelversion> => you'll need them later.
  2. rm -rf the contents of the VM disks (don't delete /boot!!!) and untar your Xen backup to the correct location.
  3. Replace the restored /etc/fstab with the one you backed up from the VM.
  4. Restore the previously saved /lib/modules (you can delete the old Xen ones).
  5. Modify the MAC address in the PVE GUI so it matches the one the VM had under Xen (a sketch of steps 1-4 follows this list).
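
A sketch of the restore step, assuming the KVM VM's disk is the LV /dev/pve/vm-101-disk-1 with a single root partition that contains /boot (all names are examples; the exact /dev/mapper name that kpartx creates may differ on your system):

# on the PVE server, with the VM stopped
kpartx -av /dev/pve/vm-101-disk-1
mount /dev/mapper/pve-vm--101--disk--1p1 /mnt
cp /mnt/etc/fstab /root/fstab.kvm
cp -a /mnt/lib/modules /root/modules.kvm
# wipe everything except /boot, then unpack the Xen backup
find /mnt -mindepth 1 -maxdepth 1 ! -name boot -exec rm -rf {} +
tar -xzf /tmp/domu-root.tar.gz -C /mnt
cp /root/fstab.kvm /mnt/etc/fstab
cp -a /root/modules.kvm/. /mnt/lib/modules/
umount /mnt
kpartx -dv /dev/pve/vm-101-disk-1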

Now you can start your VM... it should work ;)

Don't forget to tell Xen not to start the old DomU anymore ;)

Don't hesitate to report any issues, in case I forgot to note down some steps...

NB: if you think it would be useful to put this in the wiki, I can do it.
 