migrating lxc containers from crashed HA server

mxwx

New Member
May 22, 2024
Hi together,

I am facing a crashed HA setup consisting of mainly two servers. One of the servers has crashed. I managed to start up the second server, for which I had to mess up the HA config (corosync) a bit. Now I am trying to migrate the LXC containers and KVM guests to the second server until the new hardware arrives.

Is there a common way to "free" the stuck containers from pve01 and migrate them to pve02?

I can access the /etc/pve/nodes directory and can even see the config files. Images and some other data are stored on a still-working NFS share. If I try to simply copy config files from /etc/pve/nodes/pve01 to /etc/pve/nodes/pve02, I get a permission error.
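If pve01 is permanently offline, one commonly reported approach (this is a recovery workaround, not an official migration; the container ID 101 below is a placeholder) is to lower the expected quorum so /etc/pve becomes writable again, then move the config file rather than copying it:

```shell
# Run on pve02. Only do this if pve01 will never come back online
# with this config -- two nodes owning the same guest causes corruption.
pvecm expected 1                          # make /etc/pve writable without quorum

# Moving the config within the cluster filesystem reassigns the CT to pve02.
# CT ID 101 is an example -- substitute your real VMID.
mv /etc/pve/nodes/pve01/lxc/101.conf /etc/pve/nodes/pve02/lxc/101.conf

pct start 101                             # start the container on pve02
```

Copying into /etc/pve typically fails because pmxcfs enforces its own permissions and quorum rules; a `mv` inside the same filesystem is the usual way to reassign a guest to another node.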

If I create a new LXC container on pve02 (regardless of whether I reuse the old MAC address and/or IP) and copy the raw image into the new container, I can start the container but don't get a working SSH connection, and I also can't log in via the Proxmox console. I get a login prompt in the console, but the former username(s) and password(s) no longer work. I believe that no shell is working at all, or at least that sshd is not starting up correctly. By the way, I can ping the machines, so the network interfaces seem to be working in general.

Does anyone know how to "free" the stuck containers from pve01 and restart them on pve02 ... or at least what the preferred way is to mount the raw files in another container to get access to the data inside?
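To get at the data without booting the container at all, a raw image can usually be loop-mounted read-only on the host. A sketch, assuming an ext4 rootfs; the image path and mount point are hypothetical, so check your NFS storage for the real location:

```shell
# Mount a container's raw rootfs read-only on the host to inspect/copy data.
# Image path is an example -- look under your NFS storage for the real file.
mkdir -p /mnt/ct-rescue
mount -o loop,ro /mnt/pve/nfs01/images/101/vm-101-disk-0.raw /mnt/ct-rescue

ls /mnt/ct-rescue/etc          # the container's filesystem is visible here
# ... copy out whatever data you need ...

umount /mnt/ct-rescue
```

If the container's config already exists on the node, `pct mount <vmid>` / `pct unmount <vmid>` does much the same thing through the Proxmox tooling.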

Below is a partial screenshot of how it currently looks in the web GUI.

Thanks in advance

Michael


ps01.png
I believe I am a bit closer to a solution for my problem.

I have switched my strategy: I now have an interim server running to which I will migrate the containers. This one is not configured with HA.

Does anyone know where I can see which user ID is mapped to which user ID in unprivileged containers, somewhat more specifically than

https://pve.proxmox.com/wiki/Unprivileged_LXC_containers, where it is stated: "All of the UIDs (user id) and GIDs (group id) are mapped to a different number range than on the host machine, usually root (uid 0) became uid 100000, 1 will be 100001 and so on."
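The effective mapping for a specific container can be read from its config file; when no custom `lxc.idmap` lines are present, the host's default delegation from /etc/subuid and /etc/subgid applies. A quick way to check (CT ID 101 is a placeholder):

```shell
# Custom per-container mappings appear as lxc.idmap lines in the CT config;
# no output means the default mapping is in effect. CT 101 is an example ID.
grep idmap /etc/pve/lxc/101.conf

# The default subordinate ID ranges the host grants to root-owned
# unprivileged containers:
grep '^root:' /etc/subuid      # typically: root:100000:65536
grep '^root:' /etc/subgid
```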

As it looks to me, containers in an HA setup aren't mapped. At least my container files are stored with "normal" user IDs, e.g. 0 for root and so on. If I try to copy and start such a .raw file, I get the problems described above. To fix that, I would try to copy only some data from my "old" containers into new ones and then start those up.
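If the old rootfs was written with unshifted (privileged) IDs, another option is to re-own every file by adding the default offset before reusing the image for an unprivileged container. A rough sketch, assuming the image is mounted writable at /mnt/ct-rescue and the default offset of 100000 applies (both are assumptions; back up the image first):

```shell
#!/bin/bash
# Shift every file's UID/GID in a mounted rootfs by +100000 so the default
# unprivileged mapping (host 100000 = container 0) owns the files again.
# /mnt/ct-rescue and OFFSET=100000 are assumptions -- verify both first.
OFFSET=100000

find /mnt/ct-rescue -xdev -print0 |
while IFS= read -r -d '' f; do
    uid=$(stat -c %u "$f")
    gid=$(stat -c %g "$f")
    # -h changes the ownership of symlinks themselves, not their targets
    chown -h "$((uid + OFFSET)):$((gid + OFFSET))" "$f"
done
```

This would explain the observed symptoms: with unshifted IDs, nothing inside an unprivileged container (including sshd and login) can read its own files.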

For that I would like to know where I can find out which user ID is mapped to which.

Thanks
 
