Help recovering backups from failed boot drive

audacity707

May 30, 2024
Hello,

I am recovering from a failed boot drive caused by a power outage. My VMs and backups were on the boot drive; I plan to change this going forward. I have a fresh Proxmox install on a new disk, with the old boot drive also plugged in.

So far, this post has been very helpful, but I am stuck trying to get to the vz/dump folder of the old install.
https://forum.proxmox.com/threads/mount-old-disk-to-move-backups-over.132888/

I have renamed my old PVE volume group and activated the old logical volumes as described in the post above. I can also add the old VM disks via the GUI under Datacenter > Storage > Add > LVM-Thin. Where I am stuck is accessing the old backups.
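For reference, the rename-and-activate steps from that thread look roughly like this as a sketch (the UUID argument is a placeholder you have to look up yourself, since both groups start out named "pve"; "oldpve" is just the new name chosen here):

```shell
# Sketch only, based on the linked thread -- nothing here runs until called.
# Look up the OLD drive's VG UUID first with: vgs -o vg_name,vg_uuid
rename_and_activate() {
    old_vg_uuid="$1"                 # the OLD drive's VG UUID (placeholder)
    vgrename "$old_vg_uuid" oldpve   # rename by UUID to avoid the name clash
    vgchange -ay oldpve              # activate the renamed group's LVs
    lvscan                           # confirm the old LVs show ACTIVE
}
```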

I renamed the old volume group to "oldpve", but I can't get the mounted directory to show the backup files. Here is the command I used to mount the old root volume:

Code:
mount /dev/oldpve/root /mnt/bak

Can someone help me with the correct path to put in Datacenter > Storage > Add > Directory, or is there another way to copy the old backups over from /var/lib/vz/dump of my old PVE install? Perhaps I can move them using the cp command in the shell?
 
What does this show?
Code:
lsblk -o +FSTYPE

Please post your output in CODE tags, and note in your post which drive contains the backups.
 
sdf is the old boot drive which has the backups

Code:
root@pve:~# lsblk -o +FSTYPE
NAME                            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS FSTYPE
sda                               8:0    0 465.8G  0 disk             
├─sda1                            8:1    0  1007K  0 part             
├─sda2                            8:2    0     1G  0 part             vfat
└─sda3                            8:3    0 464.8G  0 part             LVM2_member
  ├─pve-swap                    252:0    0     8G  0 lvm  [SWAP]      swap
  ├─pve-root                    252:1    0    96G  0 lvm  /           ext4
  ├─pve-data_tmeta              252:2    0   3.4G  0 lvm             
  │ └─pve-data                  252:4    0 337.9G  0 lvm             
  └─pve-data_tdata              252:3    0 337.9G  0 lvm             
    └─pve-data                  252:4    0 337.9G  0 lvm             
sdb                               8:16   0   5.5T  0 disk             
├─sdb1                            8:17   0     2G  0 part             
└─sdb2                            8:18   0   5.5T  0 part             
sdc                               8:32   0   5.5T  0 disk             
├─sdc1                            8:33   0     2G  0 part             
└─sdc2                            8:34   0   5.5T  0 part             
sdd                               8:48   0   5.5T  0 disk             
├─sdd1                            8:49   0     2G  0 part             
└─sdd2                            8:50   0   5.5T  0 part             zfs_member
sde                               8:64   0   5.5T  0 disk             
├─sde1                            8:65   0     2G  0 part             
└─sde2                            8:66   0   5.5T  0 part             zfs_member
sdf                               8:80   0 931.5G  0 disk             
├─sdf1                            8:81   0  1007K  0 part             
├─sdf2                            8:82   0   512M  0 part             vfat
└─sdf3                            8:83   0   931G  0 part             LVM2_member
  ├─oldpve-swap                 252:5    0     8G  0 lvm              swap
  ├─oldpve-root                 252:6    0    96G  0 lvm              ext4
  ├─oldpve-data_tmeta           252:7    0   8.1G  0 lvm             
  │ └─oldpve-data-tpool         252:9    0 794.8G  0 lvm             
  │   ├─oldpve-data             252:10   0 794.8G  1 lvm             
  │   ├─oldpve-vm--100--disk--0 252:11   0    32G  0 lvm             
  │   ├─oldpve-vm--101--disk--0 252:12   0   128G  0 lvm             
  │   ├─oldpve-vm--102--disk--0 252:13   0     4M  0 lvm             
  │   └─oldpve-vm--102--disk--1 252:14   0    32G  0 lvm             
  ├─oldpve-data_tdata           252:8    0 794.8G  0 lvm             
  │ └─oldpve-data-tpool         252:9    0 794.8G  0 lvm             
  │   ├─oldpve-data             252:10   0 794.8G  1 lvm             
  │   ├─oldpve-vm--100--disk--0 252:11   0    32G  0 lvm             
  │   ├─oldpve-vm--101--disk--0 252:12   0   128G  0 lvm             
  │   ├─oldpve-vm--102--disk--0 252:13   0     4M  0 lvm             
  │   └─oldpve-vm--102--disk--1 252:14   0    32G  0 lvm             
  └─oldpve-grubtemp             252:15   0     4M  0 lvm
 
Code:
  Found volume group "oldpve" using metadata type lvm2
  Found volume group "pve" using metadata type lvm2
 
Good, we're getting somewhere!

Doing an ls /mnt/bak/dump/ should show all your old backups.

Do not attempt to add these in PVE Storage (GUI etc.). Remember that drive sdf has failed already!

Use the regular cp command to copy the backups from /mnt/bak/dump/ to your current PVE backup location. After that they should show up in the GUI under the regular backups, and you will be able to restore from there.
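As a sketch (both paths are assumptions based on a default PVE install, as discussed in this thread; adjust to your actual mount point):

```shell
# Copy vzdump archives from the old root's dump dir to the current node's
# default backup location. Both paths are assumed defaults -- adjust as needed.
copy_old_backups() {
    src="${1:-/mnt/bak/dump}"
    dst="${2:-/var/lib/vz/dump}"
    mkdir -p "$dst"
    # vzdump archives are named vzdump-<type>-<vmid>-<timestamp>.*
    cp -av "$src"/vzdump-* "$dst"/
}
```

Call copy_old_backups (or just run the cp line directly); the copied archives should then appear in the GUI under Backups.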


Once you've finished retrieving all the data you want from that sdf drive (you may have other data on it besides the PVE backups), umount (spelled that way on purpose!) any mounts you made from that drive and wipe the disk. That's what I would do in this situation. After that you can probably reuse it.
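The unmount/deactivate/wipe sequence might look like this as a sketch (/dev/sdf is assumed from the lsblk output in this thread; double-check the device name before wiping anything):

```shell
# Destructive sketch -- verify the device with lsblk before calling this!
retire_old_disk() {
    disk="${1:-/dev/sdf}"           # assumed from the lsblk output above
    umount /mnt/bak 2>/dev/null     # release any mount from the old drive
    vgchange -an oldpve             # deactivate the old volume group
    wipefs -a "$disk"               # clear the partition-table signature
}
```

If I recall correctly, recent PVE versions also offer a Wipe Disk button under the node's Disks panel, which does much the same job.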
 
Sorry, something else to check (I'm a little concerned that only the backups appear at that mount point).
What does this show:
Code:
mount | grep "bak"
 
Thanks for the help so far!

Both commands return nothing. I don't remember altering the default storage for backups; I set it up in the GUI under Datacenter > Backup. I actually had it running too often, and it filled the partition right before the drive failed. It looks like I still have the VM disk data, so I could restore that way if I had to.
 
If the command mount | grep "bak" returns nothing, then you can assume the dirs/files listed there are local and not from a mount, probably left over from some earlier tinkering.

So it would appear that you have yet to mount anything from the old sdf drive. Once you confirm this we'll try something else.

Also what does mount | grep "oldpve" show?
 
That command returns nothing either; I have tried a few mounting approaches and perhaps haven't mounted it correctly.
 
Code:
  ACTIVE            '/dev/oldpve/data' [794.79 GiB] inherit
  ACTIVE            '/dev/oldpve/swap' [8.00 GiB] inherit
  ACTIVE            '/dev/oldpve/root' [96.00 GiB] inherit
  ACTIVE            '/dev/oldpve/vm-100-disk-0' [32.00 GiB] inherit
  ACTIVE            '/dev/oldpve/vm-101-disk-0' [128.00 GiB] inherit
  ACTIVE            '/dev/oldpve/vm-102-disk-0' [4.00 MiB] inherit
  ACTIVE            '/dev/oldpve/vm-102-disk-1' [32.00 GiB] inherit
  ACTIVE            '/dev/oldpve/grubtemp' [4.00 MiB] inherit
  ACTIVE            '/dev/pve/data' [337.86 GiB] inherit
  ACTIVE            '/dev/pve/swap' [8.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [96.00 GiB] inherit
 
Ok, good. Your old disk's VGs are active.

Assuming your old backups are in the "default installation location" (which you say you didn't change) then let's try the following:

Code:
mkdir /mnt/obaks

mount /dev/oldpve/root /mnt/obaks

lsblk

ls /mnt/obaks
 
I'm pausing to say thanks again for the help so far. I am learning a lot as we go. Here are the results.

Code:
NAME                            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                               8:0    0 465.8G  0 disk
├─sda1                            8:1    0  1007K  0 part
├─sda2                            8:2    0     1G  0 part
└─sda3                            8:3    0 464.8G  0 part
  ├─pve-swap                    252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                    252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta              252:2    0   3.4G  0 lvm 
  │ └─pve-data                  252:4    0 337.9G  0 lvm 
  └─pve-data_tdata              252:3    0 337.9G  0 lvm 
    └─pve-data                  252:4    0 337.9G  0 lvm 
sdb                               8:16   0   5.5T  0 disk
├─sdb1                            8:17   0     2G  0 part
└─sdb2                            8:18   0   5.5T  0 part
sdc                               8:32   0   5.5T  0 disk
├─sdc1                            8:33   0     2G  0 part
└─sdc2                            8:34   0   5.5T  0 part
sdd                               8:48   0   5.5T  0 disk
├─sdd1                            8:49   0     2G  0 part
└─sdd2                            8:50   0   5.5T  0 part
sde                               8:64   0   5.5T  0 disk
├─sde1                            8:65   0     2G  0 part
└─sde2                            8:66   0   5.5T  0 part
sdf                               8:80   0 931.5G  0 disk
├─sdf1                            8:81   0  1007K  0 part
├─sdf2                            8:82   0   512M  0 part
└─sdf3                            8:83   0   931G  0 part
  ├─oldpve-swap                 252:5    0     8G  0 lvm 
  ├─oldpve-root                 252:6    0    96G  0 lvm  /mnt/obaks
  ├─oldpve-data_tmeta           252:7    0   8.1G  0 lvm 
  │ └─oldpve-data-tpool         252:9    0 794.8G  0 lvm 
  │   ├─oldpve-data             252:10   0 794.8G  1 lvm 
  │   ├─oldpve-vm--100--disk--0 252:11   0    32G  0 lvm 
  │   ├─oldpve-vm--101--disk--0 252:12   0   128G  0 lvm 
  │   ├─oldpve-vm--102--disk--0 252:13   0     4M  0 lvm 
  │   └─oldpve-vm--102--disk--1 252:14   0    32G  0 lvm 
  ├─oldpve-data_tdata           252:8    0 794.8G  0 lvm 
  │ └─oldpve-data-tpool         252:9    0 794.8G  0 lvm 
  │   ├─oldpve-data             252:10   0 794.8G  1 lvm 
  │   ├─oldpve-vm--100--disk--0 252:11   0    32G  0 lvm 
  │   ├─oldpve-vm--101--disk--0 252:12   0   128G  0 lvm 
  │   ├─oldpve-vm--102--disk--0 252:13   0     4M  0 lvm 
  │   └─oldpve-vm--102--disk--1 252:14   0    32G  0 lvm 
  └─oldpve-grubtemp             252:15   0     4M  0 lvm 
bin   dev   etc   images  lib32  libx32      media  opt      proc  run   sed       srv  template  usr
boot  dump  home  lib     lib64  lost+found  mnt    private  root  sbin  snippets  sys  tmp       var
 
I'm back again.

The output is looking good now. We can see the old disk's LV oldpve/root mounted on /mnt/obaks, and the ls /mnt/obaks output shows the old drive's root filesystem contents.

Your old backups should now be located in /mnt/obaks/var/lib/vz/dump/ . You can check by entering ls /mnt/obaks/var/lib/vz/dump/

The easiest method now (assuming your current sda's root has enough space) is to simply copy these backups to the current local storage. So if you want all these backups, enter the following:

cp /mnt/obaks/var/lib/vz/dump/* /var/lib/vz/dump/

Then you should be able to choose any backup in the GUI: select the node (left pane), then local storage (left pane, further down), then Backups (second pane), choose the required backup in the third pane, and finally press the Restore button. Check any settings such as the VM ID and the storage you wish to use, and you should be good to restore the VM.

If for some reason, you don't want to copy over all the backup files to your current local storage (space constraints etc.), you could directly restore any backup from the CLI, by using the qmrestore command as shown here in the docs. So in your case it would be something like:

qmrestore /mnt/obaks/var/lib/vz/dump/{backup_file} {new_vmid} -storage {name_of_current_storage_to_use}
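A filled-in example might look like the following (the archive filename, VM ID and storage name here are hypothetical; use the real archive name from your ls output and a storage name from pvesm status):

```shell
# Hypothetical example -- substitute your actual archive name, a free VM ID,
# and a real storage name. Wrapped in a function so nothing runs by accident.
restore_from_old_dump() {
    qmrestore /mnt/obaks/var/lib/vz/dump/vzdump-qemu-101-2024_05_28-03_00_00.vma.zst \
        101 -storage local-lvm
}
```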


As I pointed out above: once you've finished retrieving ALL the data you want from that sdf drive (you may have other data on it besides the PVE backups), enter umount /mnt/obaks, shut down the node, and remove that drive from the node.
That disk should then be wiped properly before being reused. After that you can probably use it again.

Good luck.
 
Got them!

My TrueNAS (FreeNAS) backup job was attempting to back up all of my storage (6 TB), so its backups were just log files from the failed runs. I recovered the other VMs, and I am reading that there is a path to restore TrueNAS since my data drives are good.

Thanks so much for the help! I will take this opportunity to develop a better installation and backup plan, including off-machine backups.
 
