[SOLVED] Recover LXC Image from LVM-Thin Volume

Ehcks0 · New Member · May 13, 2019
Hi everyone,

After a botched removal of a node from a cluster, I'm left with no configs but several LXC disk images in an LVM-thin volume. Is there any way for me to recover them, or to write mock LXC configs to get those containers up and running? I don't remember the disk sizes; the last time I looked at the container configs was months ago.

If it's not possible to boot from the disks, is there at least a way to get files off them by attaching them to another container/VM?

I've tried converting one to qcow2 and booting from it, but that did not work.

My apologies for being vague; I'm just not quite sure where to go from here, and I will gladly provide any logs/configs/etc. needed to get this resolved.

Thanks guys!
 
Hi,
what exactly went wrong during the node removal? You can check whether the configs are still present under /etc/pve/lxc/. If they are not, check whether you still see the disks in the LVM-thin storage pool on your node.
You can also read the disks' logical sizes from the output of `lvs`. Then you can try to attach them to another VM/CT, or recreate a CT config similar to the one you had before.
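
A rough sketch of what that can look like (the VMID 101, pool name `local-lvm`, and volume name `vm-101-disk-0` below are placeholders; take the real names and the size from the `lvs` output):

```
# list logical volumes; the LSize column shows each disk's logical size
lvs

# write a minimal mock container config, pointing rootfs at the surviving
# thin volume (adjust arch/ostype/size to what the old CT actually used)
cat > /etc/pve/lxc/101.conf <<'EOF'
arch: amd64
ostype: debian
hostname: recovered
memory: 512
rootfs: local-lvm:vm-101-disk-0,size=8G
net0: name=eth0,bridge=vmbr0,ip=dhcp,type=veth
EOF
```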
 
Hi again,

The datacenter pane still showed the old nodes, but one was greyed out and the other had a red X next to it. Both were offline and no longer in the cluster. When I tried to create a VM/container, the node dropdown was empty.

EDIT: The two removed nodes mentioned above had been removed by following the guide on the Proxmox wiki.

I followed another node-removal guide, which first backed up the existing configuration and then removed the node. When I tried to put parts of the original config back into the clean one, the VMs would not appear, and when I manually created the VMs and swapped the drives into their configurations, they failed to start. At this point I opened this thread.
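
For reference, the "swap" was just pointing the new VM's disk line at the old thin volume, roughly like this (the VMID 100 and volume name here are placeholders):

```
# either edit /etc/pve/qemu-server/100.conf and set the disk line by hand:
#   scsi0: local-lvm:vm-100-disk-0,size=32G
# or let qm write it:
qm set 100 --scsi0 local-lvm:vm-100-disk-0
```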

As of this morning I had tried once more to attach the disk to a newly created VM/CT and boot from it, but that failed. (Your suggestion of the `lvs` command did show me the drive size I needed.) My next attempt is listed below and successfully got me to the files I needed.

In the end I resolved the issue in quite a roundabout way; these are the steps I took (a rough command sequence is sketched after the list):
  1. Converted the raw image to a qcow2 image, writing the file to an NFS share on another server (I did not want to use the LVM-thin drive, and there was not enough space on the local LVM drive)
  2. Created a VM and stored its data on the NFS share
  3. Copied the exported qcow2 drive into the new VM's folder and modified the config so it was included in the VM
  4. Detached the drive that was created during VM setup and attached the qcow2 image
  5. Attached a SystemRescueCD ISO and booted into it
  6. Ran `fdisk -l` to confirm the qcow2 image was indeed connected (it showed up as /dev/sda)
  7. Attempted to mount the image (`mount /dev/sda /mnt/recovery`); this failed, claiming a bad filesystem
  8. Ran `fsck.ext4`, which found issues but did not fix them, and suggested the command below
  9. Ran `tune2fs -f -E clear_mmp /dev/sda`, which cleared the stale multi-mount-protection flag
  10. Ran `fsck.ext4` once more, and it found no errors
  11. Mounted /dev/sda again and was able to access all the files I needed
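
For anyone who finds this later, the rough command sequence was as follows. The VG/LV names, storage IDs, and VMIDs are from my setup (and partly reconstructed from memory), so adjust them to yours:

```
# step 1: convert the raw thin volume to qcow2 on the NFS share
qemu-img convert -f raw -O qcow2 /dev/pve/vm-101-disk-0 \
    /mnt/pve/nfs-share/images/200/vm-200-disk-1.qcow2

# step 3: include the copied image in the new VM's config,
# e.g. by adding this line to /etc/pve/qemu-server/200.conf:
#   scsi1: nfs-share:200/vm-200-disk-1.qcow2

# steps 6-11, inside the SystemRescueCD environment:
fdisk -l                           # the attached image showed up as /dev/sda
fsck.ext4 /dev/sda                 # found issues it could not fix; suggested clear_mmp
tune2fs -f -E clear_mmp /dev/sda   # clear the stale multi-mount-protection block
fsck.ext4 /dev/sda                 # clean this time
mkdir -p /mnt/recovery
mount /dev/sda /mnt/recovery       # files accessible again
```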

Thanks for the advice!
 
