OpenVZ on any kind of shared storage in PVE 2.2

mo_ (Renowned Member, Oct 27, 2011, Germany)
Hi,

I've collected some quick questions regarding storing containers on some kind of shared storage.
Oddly enough, information about this topic is usually very much outdated (as in: for 1.x).


1.) Most importantly: does Proxmox offer any kind of live migration for OpenVZ containers that does NOT need to rsync the disk content over the network (meaning: one that uses some sort of shared storage)? The way I understand it, Proxmox/OpenVZ would still use rsync even if some sort of shared storage were used for OpenVZ?

2.) I can see that the OpenVZ live migration process deletes the container's files on node1 when migrating node1->node2. Would it be possible/reasonable to have an optional switch somewhere that would result in those files STAYING on the node? If you did that, the migration BACK (node2->node1) would be faster, because rsync only needs to transfer files with a different "last modified" timestamp.

I understand that this would be a trade-off: you'd save migration time at the cost of more required storage space.
HOWEVER: considering that you need to be able to migrate the container back anyway, the storage on node1 needs to have enough space anyway. Yes, there's a difference between needing to keep X amount of space free and constantly occupying X amount of space with old data, but wouldn't this be a good idea?
 
1.) Most importantly: does Proxmox offer any kind of live migration for OpenVZ containers that does NOT need to rsync the disk content over the network (meaning: uses some sort of shared storage)?

Yes, you can use NFS.
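For reference, in PVE 2.x shared storage is defined in /etc/pve/storage.cfg. A hypothetical NFS entry for container data (storage name, server address, and export path below are placeholders) might look like:

```
nfs: ct-nfs
        path /mnt/pve/ct-nfs
        server 192.168.0.10
        export /export/containers
        content rootdir
        options vers=3
```

The same storage can also be added from the GUI under Datacenter -> Storage.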

2.) I can see that the OpenVZ live migration process deletes the container's files on node1 when migrating node1->node2.

That is currently not supported.
 
Hello, and thanks for the reply.

Yes, you can use NFS.

Does that mean that Proxmox initiates a different live migration (LM) process when an NFS share is used for the containers? Shouldn't the same procedure then be possible for DRBD-replicated containers (using OCFS2/GFS)?
 
2.) I can see that the OpenVZ live migration process deletes the container's files on node1 when migrating node1->node2. Would it be possible/reasonable to have an optional switch somewhere that would result in those files STAYING on the node? If you did that, the migration BACK (node2->node1) would be faster, because rsync only needs to transfer files with a different "last modified" timestamp.

Interesting idea. Basically this would only require the very same migration script that is used now, minus the "rm -Rf" part that removes the old VM.
This should be doable by just removing that "delete" part and calling the new script "Clone Container" :)

I like it. It would make OpenVZ "template" creation much more flexible.
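To make the proposed switch concrete, here is a toy sketch; the flag name KEEP_SOURCE and the function cleanup_source are made up for illustration and are not part of the real Proxmox migration code:

```shell
# Toy sketch of an opt-in "keep source files" behavior after migration.
# KEEP_SOURCE and cleanup_source are hypothetical names, not real PVE code.
set -e
src=$(mktemp -d)                 # stands in for /var/lib/vz/private/<CTID>
echo data > "$src/disk"
KEEP_SOURCE=1                    # the proposed optional switch

cleanup_source() {
    if [ "$KEEP_SOURCE" = "1" ]; then
        echo "keeping source files for a faster migration back"
    else
        rm -rf "$src"            # what the stock migration does today
    fi
}

cleanup_source
[ -e "$src/disk" ] && kept=yes || kept=no
echo "source kept: $kept"
rm -rf "$src"
```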
 
Actually... while researching some other idea, I stumbled upon the hint that the vzmigrate "binary" is merely a shell script, so implementing this shouldn't prove too difficult. I will probably be asked to implement either this or another modification to said script next week, so I should have more details about it then.

But your suggestion to use this as a means to clone a container in its current state is a very interesting one, too...
 
Actually... while researching some other idea, I stumbled upon the hint that the vzmigrate "binary" is merely a shell script, so implementing this shouldn't prove too difficult.

In fact, that is already implemented in 'vzmigrate'. The bad news is that we do not use 'vzmigrate' in Proxmox - we have our own script to do the migration.
 
In fact, that is already implemented in 'vzmigrate'. The bad news is that we do not use 'vzmigrate' in Proxmox - we have our own script to do the migration.

I took a look at the OpenVZ migration.

Looking at the ps output during a live migration tells me:

root 338755 338671 0 15:49 ? 00:00:00 task UPID:***:00052B43:0A3BE26B:50EC31FC:vzmigrate:400:root@pam:
root 338766 338755 12 15:49 ? 00:00:04 /usr/bin/rsync -aHAX --delete --numeric-ids --sparse *SNIP*

So apparently the pvedaemon worker does in fact call vzmigrate for the migration. This observation is reinforced by the rsync parameters in the second line; those are the exact parameters specified in vzmigrate: RSYNC_OPTIONS="-aHAX --delete --numeric-ids"

Before I look further into this: where can I find the Proxmox script that does the migration and calls vzmigrate in the process? (This is what I had hoped to figure out from the ps output in the first place.)
 
Interesting idea. Basically this would only require the very same migration script that is used now, minus the "rm -Rf" part that removes the old VM.
This should be doable by just removing that "delete" part and calling the new script "Clone Container" :)

I like it. It would make OpenVZ "template" creation much more flexible.

Yes it does. So here's my journey.

- Tried to find out which script does the migration
- After lots of grep'ing, found /usr/share/perl5/PVE/OpenVZMigrate.pm
- Googled the filename in hopes of finding it in a repository somewhere with syntax highlighting or whatever
- Instead found a thread from April explaining exactly what I was about to do
- Rejoiced
- Tested
- Works! Awesome

The thread explaining how to do it: http://forum.proxmox.com/threads/9305-VZ-migrate-speed-up-rsync-if-you-have-enought-space (Note: the line numbers aren't 100% accurate anymore, but it's within 5-10 lines, easy to find)

Btw, you actually don't need to use this to clone containers. All you have to do to create a template from an existing container is:

- Make sure the container in question is not running, OR (here's where the modification comes into play) migrate the container from node1 to node2 and use the "remains" of the container on node1
- cd into the container's directory structure (e.g. /var/lib/vz/private/666/)
- Simply issue tar -czvf /vz/template/cache/<NAME>-<DISTRO>-<ARCH>.tar.gz ./ and pay attention to the naming convention: NAME and DISTRO can be whatever you want them to be; ARCH should be amd64 or, in case of a legacy container, i386
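The packaging step can be tried safely with throwaway directories standing in for the container's private area and the template cache (paths and template name below are placeholders, not real PVE paths):

```shell
# Demo of the tar packaging step; temp dirs stand in for the real
# /var/lib/vz/private/<CTID> and /vz/template/cache paths.
set -e
ct=$(mktemp -d); cache=$(mktemp -d)
mkdir -p "$ct/etc"; echo "fake container file" > "$ct/etc/motd"
(cd "$ct" && tar -czf "$cache/myname-mydistro-amd64.tar.gz" ./)
found=$(tar -tzf "$cache/myname-mydistro-amd64.tar.gz" | grep -c 'etc/motd')
echo "matching entries: $found"
rm -rf "$ct" "$cache"
```

Running tar from inside the container directory (note the trailing ./) is what makes the archive paths relative, which is what the template loader expects.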
 
