Rebuild and recover from backups failed

special_case

New Member
Mar 9, 2022
My Proxmox install used two drives in a ZFS pool as the storage system, but one drive was starting to fail. I bought a new, larger drive, backed up the VMs, and copied /etc/pve. I swapped out the dying drive for the new one and reinstalled Proxmox. Then things went south.

I added the second drive to an LVM pool with the first drive. I copied the old /etc/pve/* files back onto the new system and then tried qmrestore to restore the old VMs. This gave me errors that "local-zfs" storage doesn't exist (and that the VMs already existed) — "local-zfs" was the old pooled ZFS storage. So I ran qmrestore with the `--force 1` and `--storage local` flags. This got the restore to work, but the VMs failed to boot.

So this is where I'm stuck. I don't understand the Proxmox storage options well enough to recreate the system exactly as it was, nor, alternatively, how to use qmrestore on a system where the storage layout has changed.
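For reference, the restore attempt looked roughly like this (the VMID, dump filename, and storage names are illustrative, not my exact values):

```shell
# List the backups visible on the 'local' directory storage
pvesm list local --content backup

# Restore over the stale VM config onto the 'local' storage
# (--force 1 overwrites the existing config; VMID 100 and the
# dump filename below are placeholders)
qmrestore /var/lib/vz/dump/vzdump-qemu-100.vma.zst 100 --force 1 --storage local
```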

  1. Given two 1TB SSDs, what is the best practice for setting my system up? Pool them with LVM or ZFS for flexibility? Keep them separate so that the VMs and local backups can live on different physical drives for robustness?
  2. How do I restore the VMs from backups on this system? Should I try to replicate the original storage configuration, or is there a way to restore the VMs onto an updated storage setup?


If it helps, this is my original /etc/pve/storage.cfg:

dir: local
        path /var/lib/vz
        content iso,backup,vztmpl
        prune-backups keep-daily=4,keep-hourly=4,keep-last=4,keep-monthly=4,keep-weekly=4,keep-yearly=4
        shared 0

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

pbs: remote_bkp
        # …
 
My Proxmox install used two drives in a ZFS pool as the storage system, but one drive was starting to fail. I bought a new, larger drive, backed up the VMs, and copied /etc/pve. I swapped out the dying drive for the new one and reinstalled Proxmox. Then things went south.
I assume you had a RAID0 / stripe of the two drives and did what you did because of that?
 
I added the second drive to an LVM pool with the first drive. I copied the old /etc/pve/* files back onto the new system and then tried qmrestore to restore the old VMs.
It's better not to overwrite the /etc/pve files of the new installation. Just set the new Proxmox installation up manually (using the copy of the previous /etc/pve files as a reference) and restore the VMs from their backups.
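In practice that workflow might look like the following (the storage ID `local-lvm`, the VMID, and the dump filename are assumptions — check the actual storage IDs on your new installation first):

```shell
# Show the storages the new installation actually has configured
pvesm status

# Restore each backup onto an existing storage that supports disk
# images (VMID 100 and the dump filename are placeholders)
qmrestore /var/lib/vz/dump/vzdump-qemu-100.vma.zst 100 --storage local-lvm
```

Restoring onto a clean, manually configured storage avoids the mismatch between the old storage.cfg entries and the storages that actually exist.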
 
