From 3-node to standalone, lost config.db

DigitalLF

New Member
Sep 11, 2024
Hi all!

I really messed up here. First I moved all VMs/LXCs to "PVE-1" from "PVE-2" and "PVE-3". Then I converted my 3-node cluster into standalone nodes, and in the process I lost my config.db.

I now need to figure out how to recover my VMs/LXCs.

I fully understand it will be quite a lot of work. All of them were on 4 different VLANs, and some also had hardware passthrough; for Unraid I had an HBA passthrough and a GPU passthrough.

The first thing I would like to recover is HAOS (Home Assistant). I know HAOS was on VLAN 102 and the UniFi Controller was on VLAN 101. Neither of these had hardware passthrough. But later on I will have to recover Unraid with its HBA passthrough.

With the help of "lvs" I can see vm-100, vm-101, vm-1000, vm-105, vm-109, and vm-155, but I have forgotten which ID belonged to which guest.

How can I recreate them and figure out what is what? This feels a bit overwhelming.
 
If you can still see the VM disks in lvs, then your situation is actually not that bad. It sounds like you only lost the configuration in /etc/pve, but the actual disks are still intact.

In that case, one practical approach is to recreate the VMs manually and reattach the existing disks.

For example, create a new VM with a temporary ID (just something unused), then instead of creating a new disk, attach one of your existing volumes (like vm-100-disk-0) to it. Once attached, try booting it and see what comes up. That’s usually the fastest way to identify which disk belongs to HAOS, Unraid, Unifi, etc.

Rough example:

# Create a throwaway VM with an unused ID (9000 here) and no new disk
qm create 9000 --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
# Attach one of the leftover volumes (replace <storage> with your storage ID, e.g. local-lvm)
qm set 9000 --scsi0 <storage>:vm-100-disk-0
# You may also need: qm set 9000 --boot order=scsi0
qm start 9000

If it boots into Home Assistant, you’ve found it. If not, stop it and try the next disk.
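For any of these volumes that belonged to containers rather than VMs, the same idea applies, but via an LXC config file instead of qm. A rough, hypothetical sketch of /etc/pve/lxc/9001.conf (the storage name, ostype, and volume name are assumptions, not taken from your setup):

```
arch: amd64
ostype: debian
hostname: recovery-test
cores: 2
memory: 1024
rootfs: local-lvm:vm-105-disk-0
net0: name=eth0,bridge=vmbr0,ip=dhcp
```

With that file in place, `pct start 9001` should bring the container up so you can see what's inside.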

Disk size can also give you hints (HAOS is usually relatively small, Unraid typically larger).
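Since lvs can print sizes without unit suffixes, you can sort the candidates largest-first to narrow things down. Here the lvs output is mocked with placeholder names and sizes (assumptions, just to show the sorting step):

```shell
# Stand-in for:  lvs --noheadings --units g --nosuffix -o lv_name,lv_size pve
lvs_out="vm-100-disk-0 32.00
vm-101-disk-0 4.00
vm-1000-disk-0 500.00
vm-105-disk-0 64.00"

# Largest first: the big one is probably Unraid, the small one HAOS
printf '%s\n' "$lvs_out" | sort -k2 -nr
```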

Once you’ve identified each VM, you can then properly recreate them with the correct VMID, VLAN tag, and passthrough settings.
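As a sketch of that last step, run against a live Proxmox node (the VMIDs, PCI address, and volume names below are placeholders, not values from this thread):

```shell
# Put the recovered HAOS VM on VLAN 102 (the bridge must be VLAN-aware)
qm set 102 --net0 virtio,bridge=vmbr0,tag=102

# Pass the HBA through to the Unraid VM (find its address with lspci)
qm set 108 --hostpci0 0000:01:00.0

# Optionally rename a volume so it matches the new VMID's naming convention,
# then point the VM at the renamed volume
lvrename pve vm-100-disk-0 vm-102-disk-0
qm set 102 --scsi0 local-lvm:vm-102-disk-0
```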

It’s definitely some manual work, but as long as the volumes are still there, this is recoverable.
 

I had a backup of Home Assistant OS, so I just set up a new one. When I run lvs, the volumes are all still there, but I have not been able to recover my Ubiquiti Controller, and my backup was 6 months old, so I installed the new UniFi Server OS as a VM instead and restored the backup there.

It also seems like I can't connect the VM disks to the machines, so I think I'm going to accept my losses.