Online migration failed

Onyx

Renowned Member
Jul 31, 2008
Just testing with the standard Debian virtual appliance and two PVEs, and online migration failed.


/usr/bin/ssh -t -t -n -o BatchMode=yes 192.168.1.245 /usr/sbin/vzmigrate --online 192.168.1.244 101
tcgetattr: Inappropriate ioctl for device
OPT:--online
OPT:192.168.1.244
Starting online migration of CT 101 to 192.168.1.244
Preparing remote node
Initializing remote quota
Syncing private
Live migrating container...
Error: Failed to suspend container
Connection to 192.168.1.245 closed.
VM 101 migration failed -
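For anyone who wants to dig further: the suspend step vzmigrate runs is essentially a vzctl checkpoint, so it can be reproduced by hand on the source node (a sketch, not verified on this setup; CT 101 is taken from the log above):

# try suspending the container manually on the source node
vzctl chkpnt 101 --suspend
# if it fails, resume the container and check the kernel log for the reason
vzctl chkpnt 101 --resume
dmesg | tail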
 

Yes, a known issue on the OpenVZ 2.6.24 kernel.

see
http://bugzilla.openvz.org/show_bug.cgi?id=850
http://bugzilla.openvz.org/show_bug.cgi?id=881

The more people push the OpenVZ team to fix this, the sooner they will probably fix it.
In the meantime, just do an offline migration; that is only a few seconds of downtime.
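For reference, offline migration is the same command without the --online switch (a sketch reusing the CTID and target address from the log above):

# offline migration: stop CT 101, sync it, and start it on 192.168.1.244
/usr/sbin/vzmigrate 192.168.1.244 101
# -r no keeps the private area on the source host as a fallback copy
/usr/sbin/vzmigrate -r no 192.168.1.244 101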
 
Hi,

I would like to get live migration working for my OpenVZ VEs.

I would like to demonstrate the feasibility of using PVE in our production environment, and live migration is attractive as we have downtime SLAs in place.

I am not interested in KVM for now, so I guess I could run a 2.6.18 PVE kernel if that would not cause any other issues.

Otherwise, as it seems those bugs are fixed, would it be very difficult to build a PVE kernel with the patches applied?

Otherwise, I'll just wait.

Thanks,

James
 
AFAIK those bugs are still not fixed.

OK, my mistake. Thanks for the quick reply. What about a PVE-optimised 2.6.18 kernel?

Is there such a thing around, or has PVE always used 2.6.24?

What if I installed the 2.6.18 ovz kernel on my PVE boxes?

Many thanks,

James
 
See, we do not test with 2.6.18, nor do we have a 2.6.18 kernel with our patches applied. I do not know which things will not work, but I guess there are many.

- Dietmar
 

Just out of interest, I moved our cluster over to the fzakernel-2.6.18-amd64 kernel and ran some tests.

At first I got the same error trying the web-based online migration, so I ran the vzmigrate command in the shell with the -v switch, and it showed:

Error: can't open file /dev/ptyp0

Using the workaround here solved the problem:

http://www.proxmox.com/forum/showpost.php?p=6875&postcount=2

and now live migration works for me. So far, everything I need in PVE is working great, but I'll keep you posted about any problems or implications.
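In case the link goes stale: the fix usually suggested for that error is to create the legacy BSD pty device nodes inside the container (a sketch, not necessarily identical to the linked post; CT 101 is just an example ID):

# create /dev/ptyp0 and /dev/ttyp0 inside the container
vzctl exec 101 mknod --mode=666 /dev/ptyp0 c 2 0
vzctl exec 101 mknod --mode=666 /dev/ttyp0 c 3 0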
 
I have the same situation.
I migrated Zenoss (from the Proxmox appliance) and got the same error.

/usr/sbin/vzmigrate --online 128.46.8.33 103
OPT:--online
OPT:128.46.8.33
Starting online migration of CT 103 to 128.46.8.33
Preparing remote node
Initializing remote quota
Syncing private
Live migrating container...
Syncing 2nd level quota
Error: Failed to undump container
vzquota : (error) Quota is not running for id 103
VM 103 migration failed -
 
I did not configure any NFS service. I don't know if Proxmox does.

My situation here is: I set up two freshly installed servers, one as master and the other as slave. I installed the OpenVZ container "zenoss" on the master and migrated it to the slave. Tra-la... I got that error.
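One thing worth ruling out (a sketch, assuming second-level quota is involved; the "Syncing 2nd level quota" step in the log only runs when per-user quotas are enabled) is the container's QUOTAUGIDLIMIT setting:

# check quota state for CT 103 on the source node
vzquota stat 103
# disable second-level (per-UID/GID) quota for the container, then retry
vzctl set 103 --quotaugidlimit 0 --save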
 

Maybe I missed something, but if you read the whole thread, you will see that Proxmox just doesn't support online migration for now. You have to wait for the mentioned bugs to be fixed in the OpenVZ kernel.

As has also been mentioned, pausing and resuming the VM to do the migration is pretty much the same anyway (there is still a slight delay, even when online migration works).
 

I clicked on those two bugzilla links above. The status shows those bugs are now fixed, but I just tried an online migration using Proxmox VE 1.8 and it still failed with the same error.
 
Thanks, Tom. But since I'd like to take advantage of the latest KVM, I'd rather use the latest kernel version. If I have to turn off my OpenVZ containers before migrating them, I can live with that.
 
OK. Just to note, our 2.6.18 kernel is based on the current RHEL 5.x series, which means a lot of KVM features are backported.
 
