Formatted Proxmox, how to restore backups? Please help me

Friends, my Proxmox had a problem today and I had to format and reinstall it.

On the same server I had a 2TB HD that contained all my backups.

I don't use PBS.

How do I mount this backup HD and restore my data to the PROXMOX main HD?

Please help me, I'm desperate...
 
My server has one 500GB SSD with PROXMOX installed.

And there's another 500GB SSD that runs my VMs.

And there's a third 2TB HD that contains the backups that PROXMOX used to make.

But I don't know how to add these two disks to the new installation of my PROXMOX.
 
What's your output of fdisk -l, lsblk and cat /etc/pve/storage.cfg? What was the HDD initially formatted with?

You probably just need to add a line to your fstab so the HDD gets mounted on reboot, and then add a new directory storage to your PVE that points to the mountpoint you used in the fstab.
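In rough outline it would look something like the sketch below; this is just the general idea, assuming an ext4 partition on the backup disk (device, UUID, mountpoint and storage name here are only placeholders, the real values come from your outputs):
Code:
# identify the backup partition and its filesystem UUID (sdX1 is a placeholder)
blkid /dev/sdX1

# create a mountpoint, add a matching line to /etc/fstab, then mount everything in fstab
# (<uuid-from-blkid> stands for the UUID printed by blkid above)
mkdir -p /mnt/backup
echo 'UUID=<uuid-from-blkid>  /mnt/backup  ext4  defaults  0  2' >> /etc/fstab
mount -a

# register the mounted directory as a PVE storage that may hold backup content
pvesm add dir backup-hdd --path /mnt/backup --content backup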
 
Hello Friend,

But I don't know how to do that.
Could you help me?
 
root@oraculo:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
root@oraculo:~#
root@oraculo:~# fdisk -l
Disk /dev/sdc: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000DM001-1CH1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 2A1DA67F-119D-4576-ACFA-733281872A74

Device Start End Sectors Size Type
/dev/sdc1 2048 3907028991 3907026944 1.8T Linux filesystem


Disk /dev/sda: 447.13 GiB, 480103981056 bytes, 937703088 sectors
Disk model: KINGSTON SA400S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E3C50E14-6A16-47F8-B234-415FAC75DA9C

Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 1050623 1048576 512M EFI System
/dev/sda3 1050624 937703054 936652431 446.6G Linux LVM


Disk /dev/sdb: 447.13 GiB, 480103981056 bytes, 937703088 sectors
Disk model: SATAFIRM S11
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 93C5F37C-803A-4539-807C-5B43CE6ECEBA

Device Start End Sectors Size Type
/dev/sdb1 2048 937701375 937699328 447.1G Linux filesystem


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@oraculo:~#
 
root@oraculo:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 447.1G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 446.6G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 96G 0 lvm /
├─pve-data_tmeta 253:2 0 3.3G 0 lvm
│ └─pve-data 253:4 0 320.1G 0 lvm
└─pve-data_tdata 253:3 0 320.1G 0 lvm
└─pve-data 253:4 0 320.1G 0 lvm
sdb 8:16 0 447.1G 0 disk
└─sdb1 8:17 0 447.1G 0 part
sdc 8:32 0 1.8T 0 disk
└─sdc1 8:33 0 1.8T 0 part
root@oraculo:~#
 
root@oraculo:~# lsblk -f | grep sdc
sdc
└─sdc1 ext4 1.0 BACKUP_1 aca2b5b6-ad69-4bec-85a7-55770cee2ebe
root@oraculo:~# lsblk -f | ls -l /dev/disk/by-id/ | grep sdc1
lrwxrwxrwx 1 root root 10 Feb 26 17:23 ata-ST2000DM001-1CH164_Z340PBRS-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Feb 26 17:23 wwn-0x5000c50065a14425-part1 -> ../../sdc1
root@oraculo:~#
 
And the output of lsblk -f | grep sdc and ls -l /dev/disk/by-id/ | grep sdc1?
root@oraculo:~# lsblk -f | ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 9 Feb 26 17:23 ata-KINGSTON_SA400S37480G_50026B7683B6F33B -> ../../sda
lrwxrwxrwx 1 root root 10 Feb 26 17:23 ata-KINGSTON_SA400S37480G_50026B7683B6F33B-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Feb 26 17:23 ata-KINGSTON_SA400S37480G_50026B7683B6F33B-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Feb 26 17:23 ata-KINGSTON_SA400S37480G_50026B7683B6F33B-part3 -> ../../sda3
lrwxrwxrwx 1 root root 9 Feb 26 17:23 ata-SATAFIRM_S11_50026B77821020CE -> ../../sdb
lrwxrwxrwx 1 root root 10 Feb 26 17:23 ata-SATAFIRM_S11_50026B77821020CE-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 9 Feb 26 17:23 ata-ST2000DM001-1CH164_Z340PBRS -> ../../sdc
lrwxrwxrwx 1 root root 10 Feb 26 17:23 ata-ST2000DM001-1CH164_Z340PBRS-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Feb 26 17:23 dm-name-pve-root -> ../../dm-1
lrwxrwxrwx 1 root root 10 Feb 26 17:23 dm-name-pve-swap -> ../../dm-0
lrwxrwxrwx 1 root root 10 Feb 26 17:23 dm-uuid-LVM-wjfbAB4XqH72vosk84GhcDoyew85FwQRhQQk1CnfF3V13w2IRieK5yxfzxW4wTWA -> ../../dm-1
lrwxrwxrwx 1 root root 10 Feb 26 17:23 dm-uuid-LVM-wjfbAB4XqH72vosk84GhcDoyew85FwQRVqUOprbQieys34xP6bmhJl7t1Ytu6l4i -> ../../dm-0
lrwxrwxrwx 1 root root 10 Feb 26 17:23 lvm-pv-uuid-36QyWf-myNa-6tLj-sUZq-ftno-M74T-HJ82nN -> ../../sda3
lrwxrwxrwx 1 root root 9 Feb 26 17:23 wwn-0x5000c50065a14425 -> ../../sdc
lrwxrwxrwx 1 root root 10 Feb 26 17:23 wwn-0x5000c50065a14425-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 9 Feb 26 17:23 wwn-0x50026b7683b6f33b -> ../../sda
lrwxrwxrwx 1 root root 10 Feb 26 17:23 wwn-0x50026b7683b6f33b-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Feb 26 17:23 wwn-0x50026b7683b6f33b-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Feb 26 17:23 wwn-0x50026b7683b6f33b-part3 -> ../../sda3
lrwxrwxrwx 1 root root 9 Feb 26 17:23 wwn-0x50026b77821020ce -> ../../sdb
lrwxrwxrwx 1 root root 10 Feb 26 17:23 wwn-0x50026b77821020ce-part1 -> ../../sdb1
root@oraculo:~#
 
Then I would do this:
1.) Create a folder where you want to mount the HDD:
Run mkdir /mnt/HDD
2.) Edit your fstab so the HDD gets automounted at boot:
Run nano /etc/fstab and add a line like this:
Code:
UUID=aca2b5b6-ad69-4bec-85a7-55770cee2ebe  /mnt/HDD  ext4  noatime,nodiratime  0  0
Exit with CTRL+X, then Y.
3.) Mount the HDD:
Run mount -a (a short verification sketch follows below)
4.) Check what the folder structure on the HDD looks like and find the "dump" directory:
Run find /mnt/HDD/ -name "dump". What's the output?
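As a quick sanity check between steps 3 and 4, something like this should confirm the disk is really mounted (assuming the /mnt/HDD mountpoint from step 1):
Code:
# show what is mounted at /mnt/HDD (should be /dev/sdc1 with an ext4 filesystem)
findmnt /mnt/HDD

# show size and usage of the mounted filesystem (should be roughly 1.8T)
df -h /mnt/HDD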
 
root@oraculo:~# find /mnt/HDD/ -name "dump"
/mnt/HDD/BACKUP_1/dump
root@oraculo:~#

I didn't quite understand what I did, but I followed your step-by-step.
But the BACKUP_1 disk still doesn't appear in the Proxmox console.
 
root@oraculo:~# cd /mnt/HDD/BACKUP_1
root@oraculo:/mnt/HDD/BACKUP_1# ls
dump images
root@oraculo:/mnt/HDD/BACKUP_1# cd dump
root@oraculo:/mnt/HDD/BACKUP_1/dump# ls
vzdump-qemu-100-2022_02_16-22_00_02.log vzdump-qemu-101-2022_02_18-23_00_02.vma.zst vzdump-qemu-102-2022_02_21-21_00_02.log
vzdump-qemu-100-2022_02_16-22_00_02.vma.zst vzdump-qemu-101-2022_02_19-23_00_02.log vzdump-qemu-102-2022_02_21-21_00_02.vma.zst
vzdump-qemu-100-2022_02_17-22_00_01.log vzdump-qemu-101-2022_02_19-23_00_02.vma.zst vzdump-qemu-102-2022_02_22-21_00_02.log
vzdump-qemu-100-2022_02_17-22_00_01.vma.zst vzdump-qemu-101-2022_02_20-23_00_02.log vzdump-qemu-102-2022_02_22-21_00_02.vma.zst
vzdump-qemu-100-2022_02_18-22_00_02.log vzdump-qemu-101-2022_02_20-23_00_02.vma.zst vzdump-qemu-104-2021_11_23-20_00_02.log
vzdump-qemu-100-2022_02_18-22_00_02.vma.zst vzdump-qemu-101-2022_02_21-23_00_02.log vzdump-qemu-104-2021_11_23-20_00_02.vma.zst
vzdump-qemu-100-2022_02_19-22_16_35.log vzdump-qemu-101-2022_02_21-23_00_02.vma.zst vzdump-qemu-104-2021_11_24-20_00_02.log
vzdump-qemu-100-2022_02_19-22_16_35.vma.zst vzdump-qemu-101-2022_02_22-23_00_02.log vzdump-qemu-104-2021_11_24-20_00_02.vma.zst
vzdump-qemu-100-2022_02_20-22_00_02.log vzdump-qemu-101-2022_02_22-23_00_02.vma.zst vzdump-qemu-104-2021_11_25-20_00_01.log
vzdump-qemu-100-2022_02_20-22_00_02.vma.zst vzdump-qemu-102-2021_11_21-21_00_01.log vzdump-qemu-104-2021_11_25-20_00_01.vma.zst
vzdump-qemu-100-2022_02_21-22_00_02.log vzdump-qemu-102-2021_11_30-21_00_02.log vzdump-qemu-104-2021_11_26-20_00_01.log
vzdump-qemu-100-2022_02_21-22_00_02.vma.zst vzdump-qemu-102-2022_02_16-21_00_02.log vzdump-qemu-104-2021_11_26-20_00_01.vma.zst
vzdump-qemu-100-2022_02_22-22_00_01.log vzdump-qemu-102-2022_02_16-21_00_02.vma.zst vzdump-qemu-104-2021_11_27-20_00_02.log
vzdump-qemu-100-2022_02_22-22_00_01.vma.zst vzdump-qemu-102-2022_02_17-21_00_02.log vzdump-qemu-104-2021_11_27-20_00_02.vma.zst
vzdump-qemu-101-2021_11_30-23_00_02.log vzdump-qemu-102-2022_02_17-21_00_02.vma.zst vzdump-qemu-104-2021_11_28-20_00_02.log
vzdump-qemu-101-2022_01_09-23_00_02.log vzdump-qemu-102-2022_02_18-21_00_02.log vzdump-qemu-104-2021_11_28-20_00_02.vma.zst
vzdump-qemu-101-2022_02_16-23_00_01.log vzdump-qemu-102-2022_02_18-21_00_02.vma.zst vzdump-qemu-104-2021_11_29-20_00_02.log
vzdump-qemu-101-2022_02_16-23_00_01.vma.zst vzdump-qemu-102-2022_02_19-21_00_02.log vzdump-qemu-104-2021_11_29-20_00_02.vma.zst
vzdump-qemu-101-2022_02_17-23_00_02.log vzdump-qemu-102-2022_02_19-21_00_02.vma.zst vzdump-qemu-104-2021_11_30-20_00_02.log
vzdump-qemu-101-2022_02_17-23_00_02.vma.zst vzdump-qemu-102-2022_02_20-21_00_02.log
vzdump-qemu-101-2022_02_18-23_00_02.log vzdump-qemu-102-2022_02_20-21_00_02.vma.zst
root@oraculo:/mnt/HDD/BACKUP_1/dump# ^C
root@oraculo:/mnt/HDD/BACKUP_1/dump#

Now I can see my backups through the shell, but do I have to restore them manually?

How would I do that?
 
5.) Add a directory storage to your PVE, mark it as a mountpoint, and allow backup content on it:
Run pvesm add dir HDD --path /mnt/HDD/BACKUP_1 --is_mountpoint yes --content backup

Now you should see a new storage called "HDD" in your webUI containing your backups. Select it in the webUI, go to its "Backup" tab, then select a guest you want to restore and hit the "Restore" button.

But you probably first want to get your VM storage working, so you have a storage you can restore your backups to.
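If the webUI route is inconvenient, a restore can also be started from the shell. A rough example using one of the archives from your dump listing (the VM ID and target storage are assumptions; adjust them to your setup and make sure the target storage has enough space):
Code:
# restore the most recent backup of VM 100 onto the local-lvm thin pool
qmrestore /mnt/HDD/BACKUP_1/dump/vzdump-qemu-100-2022_02_22-22_00_01.vma.zst 100 --storage local-lvm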
 
OK, I understand. How do I format my 500GB SSD and add it back to PROXMOX?

Then I will restore my backups onto that disk.
 
Doesn't work:

root@oraculo:/mnt/HDD/BACKUP_1/dump# pvesm add dir HDD --path /mnt/HDD/BACKUP_1 --is_mountpoint yes --content backup
create storage failed: unable to activate storage 'HDD' - directory is expected to be a mount point but is not mounted: '/mnt/HDD/BACKUP_1'
root@oraculo:/mnt/HDD/BACKUP_1/dump#
 
Hm, then maybe with this: pvesm add dir HDD --path /mnt/HDD/BACKUP_1 --content backup?
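For context, the check fails because /mnt/HDD is the actual mountpoint from your fstab entry, while /mnt/HDD/BACKUP_1 is only a subdirectory on that filesystem. Besides dropping the flag as above, a possible variant (assuming your PVE version accepts a path for is_mountpoint) would be:
Code:
# keep the offline-detection behaviour, but check the real mountpoint /mnt/HDD instead
pvesm add dir HDD --path /mnt/HDD/BACKUP_1 --content backup --is_mountpoint /mnt/HDD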
 
