[SOLVED] migrating LXC containers

mxwx

New Member
May 22, 2024
Hi,

I have some LXC containers to migrate. Some of them have the "newer" unprivileged format with mapped users, some have no mapped users; I believe these are privileged. Can someone confirm that when I mount a .raw file and see non-mapped users (root, etc.) it is a privileged LXC container, and when I see mapped users (100000, ...) it is an unprivileged container?
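For reference, this is roughly how I mount and inspect an image (VMID and paths are just examples from my setup):

# mount the raw image read-only and look at the numeric owners
mkdir -p /mnt/inspect
mount -o loop,ro /var/lib/vz/images/101/vm-101-disk-0.raw /mnt/inspect
ls -ln /mnt/inspect    # uid/gid 0 would mean privileged; 100000 and up means mapped/unprivileged
umount /mnt/inspect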

If that is the case, is there a difference when migrating containers to another server manually (as of now I am not able to migrate via the web GUI)? Right now I can "migrate" with ease by copying the conf and the raw file when the container has an "unprivileged: 1" parameter in its conf file (which also corresponds to the mapped users I see when mounting the raw file manually).
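The copy itself is nothing special; for an unprivileged container it boils down to this (VMID 101 and the local storage paths are placeholders):

ssh root@new-server mkdir -p /var/lib/vz/images/101
scp /etc/pve/lxc/101.conf root@new-server:/etc/pve/lxc/101.conf
scp /var/lib/vz/images/101/vm-101-disk-0.raw root@new-server:/var/lib/vz/images/101/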

On the other hand, I get a dysfunctional container when manually "migrating" a conf and raw file where the "unprivileged" parameter is missing. I believe these containers are privileged, precisely because of the missing "unprivileged" parameter in the conf file.

How can I "copy" such a probably privileged container and get it running on another server?
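For completeness: if the container can still be registered on a working node, the supported route would be a backup/restore cycle, roughly like this (VMID and storage names are placeholders):

vzdump 101 --mode stop --compress zstd    # on the old node; writes to /var/lib/vz/dump by default
scp /var/lib/vz/dump/vzdump-lxc-101-*.tar.zst root@new-server:/var/lib/vz/dump/
# on the new server:
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-*.tar.zst --storage local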

Thanks
 
I had an HA cluster with working replication, but that is gone :-(. All I currently have are the shared raw files of the containers and their conf files. Some of them I have successfully "migrated" to another server by simply copying the conf and raw files, and they are running again. What doesn't work well right now are the (I believe privileged) containers. Is there a way to simply copy them and get them running again manually, without the migration machinery of the web UI? There must be a way; after all, the web UI does the same thing in the background in a reproducible way.
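Concretely, adopting a surviving conf and raw file on the new server looks roughly like this for me (VMID, storage name and paths are placeholders; the rootfs line has to match where the raw file actually lives):

cp /path/to/saved/lxc001.conf /etc/pve/lxc/101.conf   # filename must be <vmid>.conf
# the rootfs entry must point at storage the new server actually has, e.g.:
#   rootfs: local:101/vm-101-disk-0.raw,size=8G
pct config 101    # sanity check: does PVE parse the config?
pct start 101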
 
Look at the command "qm remote-migrate". Maybe this can help you, but I think it's only for VMs.
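For what it's worth, recent PVE versions also ship an experimental "pct remote-migrate" counterpart for containers; if your version has it, the invocation is roughly like this (host name, token, fingerprint, bridge and storage are all placeholders):

pct remote-migrate 101 101 \
  'apitoken=PVEAPIToken=root@pam!mytoken=<secret>,host=new-server,fingerprint=<fp>' \
  --target-bridge vmbr0 --target-storage local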
 
Thanks for your answer. I also think that qm... is for KVM, while pct is for LXC containers. When I try to use the pct tool to migrate containers that were not last started locally, it tells me that there is no local container to migrate. The question for me is how to "migrate", or rather simply copy, the persistent data I have (conf and raw files) in a way that lets another server start it.
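That matches how pct works: it only operates on containers whose config is assigned to the local node. Two quick checks (no assumptions beyond a standard setup):

pct list                    # lists only the containers registered on this node
ls /etc/pve/nodes/*/lxc/    # shows which node each <vmid>.conf is currently assigned to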
 
You can manually copy the conf and raw file via SSH.
Sure, but that results in a non-functioning container. This way I have "migrated" some unprivileged containers successfully; privileged containers seem to be different. What I see is that unprivileged containers map users to offset users like 100000 for user 0 (root), but privileged containers (at least on my servers) have no mapped users: root is 0, and so on. Because of that, I can't get privileged containers running correctly anymore after migrating them by copying the conf and raw file. The container will generally start up from the web GUI and I can ping it, but I can't log in via shell anymore. When using "pct enter" instead, I get a partially functional shell that constantly runs into permission problems. I can see user:group is mostly root:root, as it should be, but something is very strange, as I permanently get permission (probably ACL) problems.
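One workaround I have seen described (not what I ended up doing; it is destructive, so run it on a copy) is to shift the whole rootfs into the default unprivileged idmap and then declare the container unprivileged. A sketch, assuming the default 100000 offset and a plain privileged rootfs:

mkdir -p /mnt/shift
mount -o loop /var/lib/vz/images/101/vm-101-disk-0.raw /mnt/shift
# shift every uid/gid up by 100000; POSIX ACLs, if any, would need shifting too
find /mnt/shift -print0 | while IFS= read -r -d '' f; do
  u=$(stat -c %u "$f"); g=$(stat -c %g "$f")
  chown -h "$((u + 100000)):$((g + 100000))" "$f"
done
umount /mnt/shift
# finally add "unprivileged: 1" to the container's conf file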
 
For clarity: I had a cluster before that is broken almost completely. I am just "migrating" to a standalone server as an interim solution, to start over from there. On one of the cluster servers I still have access to the cluster's /etc/pve. From there I even tried to move some conf files (the ones of those evil privileged containers :-)) out of one of the /etc/pve subdirectories into another, trying to start the containers on one of the still-running cluster servers. That doesn't work either: I get a message saying that the conf file already exists at the target location (which does not show up with ls). There is some magic that pins files to their location. Maybe it would help if someone can tell me how to "move" such magically stuck conf files to another location on a cluster server, so I can start the containers on the sad remains of my cluster. But the final destination has to be the new standalone server, from which I will start again. The old cluster servers will all go to the shredder, mainly for educational reasons, to teach them not to bother me anymore ;-).
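For context, the "magic" is pmxcfs, the cluster filesystem behind /etc/pve: it goes read-only when the cluster has no quorum and it enforces that each VMID is assigned to exactly one node, which explains the "file exists" message. On a single surviving node the usual recovery sequence is roughly this (node names and VMID are placeholders; use with care):

pvecm expected 1    # tell corosync one vote is enough, making /etc/pve writable again
mv /etc/pve/nodes/<dead-node>/lxc/101.conf /etc/pve/nodes/<surviving-node>/lxc/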

Let's say one of the containers I want to move, lxc001, was running on the completely lost pve01, and I can still start up pve02 (formerly both in cluster01). When starting up pve02, I can see the container lxc001, which was running on pve01 before the crash. Now I want to move lxc001 to pve02 and start it there. pve02 still sees lxc001 on the unreachable pve01 and refuses to start it on pve02. Even trying to migrate lxc001 to pve02 does not work, as the pve02 web GUI says that lxc001 cannot be migrated.

What can one do in such a situation?

(All unprivileged containers move with ease, but the privileged ones are bothering me a lot.)
 
And of course you have no backup :cool:
Actually I have more than enough backups, and they even work. But that didn't help me in my situation at all; it was a real disaster that wasn't curable with backups. Maybe it would have been helpful if I had had an HA cluster at different locations. I had a direct lightning strike on my server farm. In such situations you can put a backup where the sun never shines; you surely have other problems than a f.....g backup.

But by the way, it is always funny that people who don't want to or can't help ask at some point whether there is a backup. That doesn't help much in general, nor does it make one look competent at solving problems. Even if I had no backups, asking for them would make little sense, other than perhaps to have fun at my expense in their absence. What difference would it make to my questions if I had no backup? As I said, I didn't even need the backups so far: my conf and data files exist and I have access to them. My questions obviously concern other fields than some dumb questions about best-practice bullshit.

I actually came back right now to tell the community that I solved my problem and have all missing containers running in the meantime, and to support the community by telling how I fixed my problems. For now I will leave it at the information that such situations are solvable. Whoever needs constructive and competent support concerning this question may try to reach me here ... and if I am still an active member, I will try to find my records and give proper and competent support individually.

To all others who were interested in my problem and tried their best to help me solve it by thinking about it, even if they couldn't really help: a warm thank you. As you can imagine, I was in big trouble; now everything is on a good path and I see light at the end of the tunnel (as we say in Germany).
 
