Proxmox not booting anymore after motherboard swap

I don't seem to be able to recover the data this way... any other ideas I might try? I'm getting really hopeless by now. On top of that, I think the ZFS pool may already be corrupted from booting different OSes. My hope of getting the old system working again is really low by now.
If you can get a fresh new Proxmox installation running on another (new?) drive in that PC (temporarily remove the current drives to preserve their contents), then you could see if the virtual disks of the VMs are found on your current drives. This will also confirm whether the drive has problems or if it is indeed a configuration issue. If the data is still there, you (or someone else) can probably guess the VM configs (as they are all quite similar). Maybe this won't work and you cannot recover the old VMs, but at least you will have a new working Proxmox.

In short: installing a fresh Proxmox will at least give you a working Linux with ZFS support to start investigating what can be salvaged from the old installation.
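For example, once the fresh installation is up, a couple of commands should already tell you a lot (just a sketch; names and output will differ on your system):

    lsblk           # list the attached drives and their partitions
    zpool import    # scan the disks for importable ZFS pools (lists them without importing)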
 

I have given up by now and will just reinstall the boot drive. I'll see whether I can still recover the VMs there or not.
 
In a way, that is almost exactly what I suggested.
Can you import the previous ZFS pool (from the non-boot drive) that contained the VMs? You will probably get a warning about it being in use by another system, which is fine.
Do you see virtual disks like name-of-the-pool/vm-100-disk-0 or similar? If that's the case, we can manually write a /etc/pve/qemu-server/100.conf (or similar) that uses that disk.
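In commands, that would be roughly the following (a sketch; I'm assuming the pool is called "tank" here, so substitute whatever name the first command reports):

    zpool import            # list the pools found on the attached disks
    zpool import -f tank    # -f overrides the "in use by another system" warning
    zfs list -t volume      # the VM disks should show up as tank/vm-100-disk-0 and similar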
 

Okay, so I reinstalled PVE on the disk. (I also had a faulty disk that was spitting out errors, so I removed that drive; it was only extra storage and I wasn't using it anyway.)

Now, the VM storage is mounted again (it was LVM-thin) and I can see the following disks:

[screenshot: the VM disk images listed in the Proxmox GUI]

So that should be a really good sign, no?

However, I can't seem to "rebuild" or import my ZFS pool. How should I do this?
 
My mistake, sorry. I thought you were using ZFS for your VM storage.
Good to see the disk images appear in the Proxmox GUI; it probably means that the disks of the VMs are fine.
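If you want to double-check from the shell, something like this should work (just a sketch; I'm assuming your storage is called vm_storage):

    lvs                      # the VM disks should appear as thin volumes like vm-100-disk-0
    pvesm list vm_storage    # what Proxmox itself sees on that storage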

Now you need to recreate the 100.conf, 102.conf, etc. VM configuration files. For example:
1. Make a new VM (number 200) with a single (small) disk on vm_storage, configured as much like your old VMs as you can remember, but without anything special/VM-specific;
2. Copy /etc/pve/qemu-server/200.conf to /etc/pve/qemu-server/100.conf;
3. Change vm-200-disk-0 in /etc/pve/qemu-server/100.conf to vm-100-disk-1 (and maybe change the size parameter as well), using nano on the command line;
4. See if you can start VM 100 and whether the data on the virtual disk is correct (or maybe do step 5 first);
5. Make other changes to /etc/pve/qemu-server/100.conf if you think they are necessary to get the VM running again. You can also use the Proxmox GUI to change the VM configuration if you want;
6. Repeat steps 2 through 5 for the other VMs that you used to have (see the example config after this list).
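For reference, the end result for VM 100 could look something like this (purely illustrative; every value below is a guess that you should replace with whatever your old VM actually had):

    # /etc/pve/qemu-server/100.conf (illustrative example only)
    boot: order=scsi0
    cores: 2
    memory: 2048
    name: recovered-vm
    net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
    ostype: l26
    scsi0: vm_storage:vm-100-disk-1,size=32G
    scsihw: virtio-scsi-pci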

Best of luck!
 
