[SOLVED] Migrating an LXC Ceph mountpoint to a VM

JoaGamo

Hello! I have an LXC that sits on an erasure-coded pool we have since deprecated. I want to move its 20 TB mountpoint to an SSD erasure-coded pool without taking down the container. First I tried shutting the container down and doing a migration, but that was going to take longer than a week, so I'm now considering Ceph RBD live migration: https://ceph.io/en/news/blog/2025/rbd-live-migration/

The problem is that Proxmox says my kernel is missing the Ceph live-migration module. Is that because of this note in the article?

"Note: Linux KRBD kernel clients currently do not support live migration"

Can I force use of the librbd client for the LXC? How? And what are the implications, if any?
 
Puh, overall I would consider whether switching everything over to a VM might not be the better option, because then moving a disk image between any storages while the VM is running is no problem at all.
LXC containers cannot be live-migrated to other nodes and need to be shut down if you move a disk to a different storage.

If you try to force a move of the disk image behind the scenes directly on Ceph, be aware that this does not involve the Proxmox VE stack at all, so there could be other issues on top. And you would definitely need to adapt the LXC's config to point the mount point at the storage that is configured for the other pool!
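As a minimal sketch of what that config change would look like (the CT ID 998 and storage name deprecatedPool come from this thread, while ssdPool and the mount path /mnt/data are made-up examples):

```
# /etc/pve/lxc/998.conf (hypothetical mount path and new storage name)
# before: the volume is referenced via the storage backed by the old pool
mp0: deprecatedPool:vm-998-disk-0,mp=/mnt/data,size=20T
# after: same image, now referenced via the storage configured for the new pool
mp0: ssdPool:vm-998-disk-0,mp=/mnt/data,size=20T
```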

Sorry, but I don't have any experience regarding Ceph's RBD live migration.

How is the container configured? As in, is that one large mount point a dedicated one and not the rootfs one?
Because then you could try to create a VM and manually rename that RBD image of the container so that it belongs to the new VM's VMID. Then import it into the VM and mount it there as a second disk. It should be formatted with ext4 if it was created for a CT. This way, you still need to reconfigure the services and such in the VM, but you could potentially avoid the large downtime and only have a short one. Once it belongs to the VM and services are running as expected, you can then do the Move Disk while the VM is running.
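For orientation, Proxmox VE names RBD volumes `vm-<VMID>-disk-<N>` (as the mp0 line later in this thread shows), so the rename just moves the image from the CT's ID to the VM's ID. A rough sketch, with pool name and IDs made up:

```
# list the images in the underlying Ceph pool (pool name and IDs are examples)
rbd -p deprecated_ec_pool ls
# vm-998-disk-0     <- the CT's 20T mount point
# after renaming it to e.g. vm-100-disk-1, Proxmox treats it as belonging to VM 100
```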
 
Hello, thanks for your reply. When I made this container 4-5 years ago, I did not expect the mountpoint to grow this much (nor that I'd need to migrate it one day :) ), so I'm fine with moving it to a VM. I thought that moving to a VM meant stopping the services, using rsync to copy everything from the LXC to the VM, and then starting the services on the VM, as I found in an older post, which was not ideal for me.

The container is configured exactly like you mentioned: the 20 TB volume is a dedicated mountpoint (mp0), and the rootfs is a separate 8 GB volume.

Sir, that was very smart. I did exactly what you said and it worked :). Can't believe it was that easy. I didn't know the container images were ext4-formatted, but it makes sense now that I think of it.
Noting down my steps for reference:
1) I migrated everything my services needed (config files, keys, etc.) from this Ubuntu LXC to a new Ubuntu VM
2) Open /etc/pve/lxc/ID--LXC-To-Move.conf
3) Find `mp0: deprecatedPool:vm-998-disk-0` and copy the volume reference `deprecatedPool:vm-998-disk-0`
4) In the Ubuntu VM's QEMU config (/etc/pve/qemu-server/ubuntuvmID.conf), add a new drive, be it sata1: or scsi1:, pointing to the volume we just copied. Then, as mentioned, add the new drive to /etc/fstab and mount it as ext4 (see the sketch below).
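Roughly, steps 2-4 look like this on my setup (the VMID 100, the mount path /mnt/data, and the device /dev/sdb are just examples; adjust them to yours):

```
# /etc/pve/lxc/998.conf - the existing mount point line to copy the volume reference from
mp0: deprecatedPool:vm-998-disk-0,mp=/mnt/data,size=20T

# /etc/pve/qemu-server/100.conf - attach the same volume as an extra drive on the VM
scsi1: deprecatedPool:vm-998-disk-0,size=20T

# inside the VM: find the filesystem UUID, then mount it permanently via /etc/fstab
# blkid /dev/sdb    (assuming the new drive shows up as /dev/sdb)
UUID=<uuid-from-blkid>  /mnt/data  ext4  defaults  0  2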

What is left now is renaming the image with the Ceph tools. Neither Proxmox nor Ceph is complaining, so right now I'm doing the live migration as I wanted. Thank you!
I changed the thread title to reflect this solution.
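For anyone finding this later, the live migration itself boils down to the librbd migration commands, roughly like this (pool and image names are examples; the VM has to access the image via librbd rather than krbd, per the note quoted earlier):

```
# prepare the migration: creates the target image and links it to the source
rbd migration prepare oldpool/vm-100-disk-1 ssdpool/vm-100-disk-1
# copy the data in the background; the image stays usable through librbd clients
rbd migration execute ssdpool/vm-100-disk-1
# once the copy finishes, commit to drop the source image
rbd migration commit ssdpool/vm-100-disk-1
# note: for EC-backed images the target may also need image options such as
# --data-pool; check `rbd help migration prepare` for your Ceph version
```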
 
Cool that it worked!
Do rename the RBD image though, so the name reflects the new VMID! Otherwise this could have unintended side effects, as Proxmox VE uses the name to decide which guest a disk image belongs to. Worst case, you delete the container with the "remove unreferenced disks" option enabled and the image gets wiped along with it!
The changed name will need to be updated in the VM's config as well, of course :-)

I could have been a bit more verbose, but I would have done:

  1. create new VM and set it up as much as possible
  2. stop CT
  3. rename the RBD image: `rbd -p POOL rename vm-OLDVMID-disk-X vm-NEWVMID-disk-Y`. Make sure the last number is still free.
  4. scan for unreferenced disk images: `qm rescan --vmid NEWVMID`. It will show up as an unused disk.
  5. configure the unused disk to attach it to the VM (see the sketch below)
  6. finalize by mounting it permanently inside the VM
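For step 5, the unused disk can be attached in the GUI or on the CLI, roughly like this (placeholders as in the list above; the bus/slot scsi1 is an example):

```
# after qm rescan, the volume shows up as unused0 in the VM config; attach it, e.g.:
qm set NEWVMID --scsi1 POOL:vm-NEWVMID-disk-Y
# note: here POOL is the Proxmox storage name for that pool, which may differ
# from the Ceph pool name used with the rbd command
```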
 