Reducing migration time for LXC container

jinjer

Renowned Member
Oct 4, 2010
With OpenVZ containers, we had a two-step migration process: in step 1, an initial rsync ran while the container was still running. Then the container was stopped, rsynced again, and started on the new node.

This shortened migration downtime by an order of magnitude on big containers (millions of small files, e.g. web servers).

Now this is gone. You have to stop, migrate, and start.

By leveraging a dual rsync, migration on local storage would take much less time.
By leveraging ZFS snapshots, it would take a few seconds.
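For the ZFS case, a rough sketch of what snapshot-based migration could look like, assuming the container's rootfs lives on its own ZFS dataset (the dataset name, target node, and CT ID below are hypothetical; this is not Proxmox's built-in tooling):

```shell
# Hypothetical names; adjust to your pool layout and cluster.
DATASET=rpool/data/subvol-101-disk-0
TARGET=root@node2

# 1) Snapshot and send the bulk of the data while the container runs.
zfs snapshot "$DATASET@migrate1"
zfs send "$DATASET@migrate1" | ssh "$TARGET" zfs recv "$DATASET"

# 2) Stop the container, snapshot again, and send only the delta.
#    The incremental stream is tiny, so downtime is a few seconds.
pct stop 101
zfs snapshot "$DATASET@migrate2"
zfs send -i @migrate1 "$DATASET@migrate2" | ssh "$TARGET" zfs recv "$DATASET"

# 3) Move the container config to the target node and start it there.
```

The incremental `zfs send -i` in step 2 is what keeps the downtime window short: only blocks changed since the first snapshot cross the wire.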

Is there any document describing how migration works "from the inside", so I can try to implement it? Or are you already working on it, and can I help speed up the implementation?
 
When Proxmox is released with the next LXC LTS, which is LXC 2.0, you will be able to live-migrate LXC containers via checkpoint/restore (CRIU). This feature has been available since LXC 1.1. Let's hope that by the time of the LXC 2.0 release, the LXC and CRIU devs have solved the last big problem for HA: currently you cannot use CRIU if the container's root is mounted on shared storage.
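Independently of whatever Proxmox eventually ships, the underlying LXC mechanism looks roughly like this (container name and dump directory are hypothetical; requires LXC >= 1.1 with CRIU installed, and the usual CRIU kernel prerequisites):

```shell
# Hypothetical container name and dump location.
CT=ct101
DUMP=/var/lib/lxc-dumps/$CT
mkdir -p "$DUMP"

# Checkpoint: dump the container's process state to disk and stop it.
lxc-checkpoint -n "$CT" -D "$DUMP" -s

# Copy the dump directory (and the rootfs, if it is not on shared
# storage) to the target node, then restore there:
#   lxc-checkpoint -n "$CT" -D "$DUMP" -r
```

This is a sketch of the bare LXC commands, not of the Proxmox integration.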
 
I do not plan to use shared storage for LXC containers at all because of the speed limitations of shared storage in my usage scenario. Also, live migration is nice but not necessary.
What I am trying to achieve is the least possible downtime when migrating services from one node to another. A two-hour copy is out of the question when it can be done in under a minute with the appropriate method. The tools needed were available and used by Proxmox in previous releases; it was done properly in 3.x, 2.x, and 1.x (if memory serves).

Can I offer my help to "fix" this ?
 
I had a dev environment in the past and implemented ploop support for OpenVZ, which was not accepted. This is why I need a clear go-ahead from you before implementing dual-stage migration for LXC containers on local storage.

I am not sure whether there is a way to keep the LXC container running during the first rsync, then stop it, sync again, migrate the config, and start it on the new node.

Can you confirm that this type of operation is indeed desired?
 
Bump.
It's an important issue. I have containers almost a terabyte in size. In OpenVZ, after the initial sync, backup was quick. How does this work with the new LXC single *.raw files? Is managing such CTs a pain? How are they managed in practice? Transferring a 1 TB file between hosts is out of the question.
 
Hi,

sorry for reviving this old thread, but I have exactly the same problem. With OpenVZ, migration to new hardware was easy and quick even with big containers; with LXC, container migration is a horrible task. I need to migrate a container from one OVH SoYouStart server to another, so I cannot use NFS, external storage, etc. I cannot use ZFS because of quotas. Has there been any progress on this problem?

Thank you in advance.
Jan
 
