The following is for entertainment purposes only. Do not attempt to follow my direction. I am a disgusting noob.
Thought I would write up the restoration of my Proxmox server, specifically the process of retrieving my VMs without any backups. The lack of backups was simply foolish; I live too optimistically and had no failure plan in place. But for some reason my Plex server, which is a VM on Proxmox, was available one day and gone the next. I noticed I couldn't hit my Home Assistant page either, and even Proxmox was unavailable on the network. A remote reboot of the box bore no fruit, so I attached a monitor and powered up the server. "No boot disk available."
I had absolutely no clue what happened, but I set to work on making a new server. I had nearly identical hardware just lying around and installed a fresh Proxmox on a different drive in the new box. I then added the old server drive to the new Proxmox, which initially stalled the boot process when it complained about two ZFS rpools being available. I had to tell Proxmox to grab the new drive and leave the old drive alone.
With Proxmox loaded, most of the restoration happened in the CLI.
First I had to import the old drive so I could work with its ZFS pool. My dad's the pro: he sent me some links and had to walk me through the first VM restore.
https://docs.oracle.com/cd/E36784_01/html/E36835/gbdbk.html
The import syntax there was a little off for my setup; this is what worked for me:
zpool import -R /a -t -F 11468437000666441006 altrpool
The string of digits is the pool ID of the old drive, as listed by running zpool import with no arguments.
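For anyone following along, here is the same import spelled out with comments. This is just my understanding of the flags, so check the zpool-import man page rather than taking my word for it:

# list importable pools first; both the old and new drives show up as "rpool",
# so note the numeric pool ID of the old one
zpool import
# -R /a   mounts the old pool's datasets under /a so they don't collide with the new install
# -t      makes "altrpool" a temporary name; on disk the pool keeps its real name (rpool)
# -F      recovery mode, rolls back the last few transactions if the pool won't import cleanly
zpool import -R /a -t -F 11468437000666441006 altrpool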
zfs list
Showed the old VMs in altrpool/data/, and there were some subvols there also, which are apparently the LXC containers.
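If it helps decode the names: as far as I can tell, Proxmox keeps VM disks as zvols called vm-<ID>-disk-N and container root filesystems as datasets called subvol-<ID>-disk-N, so a recursive listing makes it obvious which is which:

zfs list -r -o name,type,used altrpool/data
# type "volume"     = vm-<ID>-disk-N, a VM disk
# type "filesystem" = subvol-<ID>-disk-N, an LXC container's root filesystem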
https://forum.proxmox.com/threads/vm-drive-recovery.121372/
This forum post was where the magic happened. My dad linked me that page too, but big thanks to MrPilote and Dunuin, who participated in the thread. The final post is where the secret sauce is. I won't try to walk you through the VM restoration, as I can't possibly do it any better than MrPilote.
The odd thing for my dad and me was the lack of the /etc/pve subfolders. /etc/pve was empty, so there were no VM conf files to help with the restoration. The one most precious to me was Home Assistant, which was heavily modified; Plex was easy to rebuild from scratch. I knew HA's VM number, so we just used MrPilote's method on that, after creating a new VM with what I believed to be the basic build for HA. Then we moved the altrpool/data VM disks over to rpool/data and launched Home Assistant. Success!
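For what it's worth, the CLI version of that placeholder VM step looks roughly like the following. The VM ID, the hardware settings, and the local-zfs storage name are all stand-ins on my part; use the original VM's ID and whatever settings it actually had:

# hypothetical placeholder VM; 110 and the hardware settings below are examples only
qm create 110 --name homeassistant --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
# once the recovered disk is sitting at rpool/data/vm-110-disk-0, attach it and boot from it
qm set 110 --scsi0 local-zfs:vm-110-disk-0
qm set 110 --boot order=scsi0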
Another custom VM I wanted was actually a container.
Dad added this thread for my reading: https://forum.proxmox.com/threads/recover-data-from-failed-lxc-container.118428/
While not exactly what I was looking for, it pointed to /etc/pve/lxc for the container conf files. The emptiness of /etc/pve seemed odd, but a bit of research revealed that /etc/pve stays empty until Proxmox is up and running, or more specifically until the pve-cluster service mounts the config database (pmxcfs) there.
https://pve.proxmox.com/pve-docs/chapter-pmxcfs.html#_recovery
This page, the recovery chapter specifically, gave me an idea of how to get my confs back. I made a Proxmox VM; that is, a nested Proxmox installed inside Proxmox. Then I copied the config.db file from /a/var/lib/pve-cluster/ into my new nested Proxmox. I also modified the /etc/hostname and /etc/hosts files there to mimic the ones from my old server, which were at /a/etc/hostname and /a/etc/hosts.
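Roughly, the dance inside the nested Proxmox VM looks like this. How you get config.db from the host into the VM is up to you, and I believe pve-cluster has to be stopped before swapping the database out:

# on the new host, the old server's cluster database sits on the imported pool
ls -l /a/var/lib/pve-cluster/config.db
# inside the nested Proxmox VM, after copying that file in somewhere (e.g. /root/config.db):
systemctl stop pve-cluster
cp /root/config.db /var/lib/pve-cluster/config.db
# edit /etc/hostname and /etc/hosts so the nested node thinks it is the old node; otherwise
# the old configs end up under /etc/pve/nodes/<old-hostname>/ instead of the local node's folders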
Following that, a reboot yielded a fully populated pve folder, with the lxc folder and qemu-server folder full of VM .conf files. Hurrah!
Then it was mostly just copying each conf file, 104.conf for example, from the old server's recovered configs to the new server, and using MrPilote's snapshot + send/receive commands to copy the drives. With that I was able to rebuild my VMs.
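I'll leave the real walkthrough to MrPilote's post, but for context, the underlying ZFS pattern is a snapshot followed by send/receive. The disk and snapshot names below are just examples, so match them to whatever zfs list shows for your VM:

zfs snapshot altrpool/data/vm-104-disk-0@rescue
zfs send altrpool/data/vm-104-disk-0@rescue | zfs recv rpool/data/vm-104-disk-0
# once the copy checks out, the leftover snapshot on the new pool can be dropped
zfs destroy rpool/data/vm-104-disk-0@rescue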
The one container was a little different, in that the snapshot command didn't work. I tried just the send/receive, and it complained about the subvol being mounted. I ran zfs umount altrpool/data/subvol-102-disk-0, and then the send/receive trick worked fine.
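For the container it was the same send/receive idea once the subvol was unmounted. I'm reconstructing the exact sequence here, so treat it as a sketch and check the names against your own pool:

zfs umount altrpool/data/subvol-102-disk-0
zfs snapshot altrpool/data/subvol-102-disk-0@rescue
zfs send altrpool/data/subvol-102-disk-0@rescue | zfs recv rpool/data/subvol-102-disk-0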
I'm not sure if any of the gibberish above makes sense to anybody but me, but perhaps it will help someone else in the future.