[TUTORIAL] PBS-client grub and just backing up partitions

Dunuin
Edit: Got this working. You might want to have a look here: https://forum.proxmox.com/threads/pbs-client-grub-and-just-backing-up-partitions.105990/post-568579

Old Post:

Hi,

Right now I'm backing up my main PVE node by booting into a Debian USB stick that has a bash script which uses proxmox-backup-client to create a block-level backup of my two mirrored 100GB system SSDs. This works fine so far, as the SSDs are small and don't store guests.
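For reference, such a whole-disk backup boils down to something like this (a minimal sketch; repository, password, disk paths and backup ID are placeholders, not my actual values):
Bash:
# block-level backup of both system disks to PBS
export PBS_REPOSITORY="user@pbs@pbs-host:8007:datastore"
export PBS_PASSWORD="myPass"
proxmox-backup-client backup \
    "sda.img:/dev/sda" \
    "sdb.img:/dev/sdb" \
    --backup-type host --backup-id "myHostname"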

But now I want to switch my two bare-metal TrueNAS servers from TrueNAS to PVE and virtualize TrueNAS. The cases can only fit 8 HDDs and 10 SSDs, and I already got 8 HDDs and 8 SSDs attached to HBAs that I want to pass through to my TrueNAS VM, so there are only 2 SSDs left for PVE. So I can't have dedicated system and VM storage SSDs.
I would prefer to do it similar to my main PVE server, as everything worked flawlessly there.

My main PVE node looks like this, with PVE on top of a Debian 11 that uses mdadm + LUKS + grub:
Code:
root@Hypervisor:~# lsblk
NAME                              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                 8:0    0  93.2G  0 disk
├─sda1                              8:1    0     1M  0 part
├─sda2                              8:2    0   286M  0 part
├─sda3                              8:3    0   488M  0 part
│ └─md0                             9:0    0   487M  0 raid1 /boot
└─sda4                              8:4    0  92.4G  0 part
  └─md1                             9:1    0  92.3G  0 raid1
    └─md1_crypt                   253:0    0  92.3G  0 crypt
      ├─vgpmx-lvroot              253:1    0    21G  0 lvm   /
      └─vgpmx-lvswap              253:2    0    61G  0 lvm   [SWAP]
sdb                                 8:16   0  93.2G  0 disk
├─sdb1                              8:17   0     1M  0 part
├─sdb2                              8:18   0   286M  0 part  /boot/efi
├─sdb3                              8:19   0   488M  0 part
│ └─md0                             9:0    0   487M  0 raid1 /boot
└─sdb4                              8:20   0  92.4G  0 part
  └─md1                             9:1    0  92.3G  0 raid1
    └─md1_crypt                   253:0    0  92.3G  0 crypt
      ├─vgpmx-lvroot              253:1    0    21G  0 lvm   /
      └─vgpmx-lvswap              253:2    0    61G  0 lvm   [SWAP]

So my disk layout for the new PVE nodes would look like this (like above, but with an additional ZFS partition for guests, a slightly bigger grub/boot partition and a smaller system partition, to leave more space for guests):
sda1 - 8 MB - grub partition
sda2 - 512 MB - not used, but reserved as ESP in case I later need to switch from grub to systemd-boot
sda3 - 1024 MB - mdadm mirror with sdb3: stores the unencrypted boot partition
sda4 - 30 GB - mdadm mirror with sdb4 -> LUKS -> LVM: stores encrypted root and swap LVs
sda5 - 168 or 368 GB - ZFS mirror with sdb5 -> encrypted datasets: stores guests
sdb1 - 8 MB - grub partition
sdb2 - 512 MB - not used, but reserved as ESP in case I later need to switch from grub to systemd-boot
sdb3 - 1024 MB - mdadm mirror with sda3: stores the unencrypted boot partition
sdb4 - 30 GB - mdadm mirror with sda4 -> LUKS -> LVM: stores encrypted root and swap LVs
sdb5 - 168 or 368 GB - ZFS mirror with sda5 -> encrypted datasets: stores guests
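Such a layout could be created with sgdisk roughly like this (a sketch with my assumed sizes and standard GPT type codes; the same commands would be repeated for /dev/sdb):
Bash:
sgdisk -n=1:0:+8M    -t=1:EF02 /dev/sda  # BIOS boot partition for grub
sgdisk -n=2:0:+512M  -t=2:EF00 /dev/sda  # reserved ESP
sgdisk -n=3:0:+1024M -t=3:FD00 /dev/sda  # mdadm member for /boot
sgdisk -n=4:0:+30G   -t=4:FD00 /dev/sda  # mdadm member for LUKS -> LVM
sgdisk -n=5:0:0      -t=5:BF01 /dev/sda  # ZFS member for guests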

All my data on sda4, sda5, sdb4 and sdb5 would be encrypted, including my guests, so just backing up the complete disks would waste space, as the disk images couldn't be deduplicated against the existing backups of the guests. The only way I see to not back up the guests twice would be to back up only sda1 to sda4 and sdb1 to sdb4 with proxmox-backup-client, so sda5 and sdb5 are excluded.

But then I'm not sure if and how the restore would work. I would set up everything using the Debian installer with grub and then install the PVE packages on top (like I already did with my main PVE node). So the questions are:

1.) I'm not a grub expert, but if I remember right, grub needs to write data to the first 1MB of unallocated space of the boot disk? If that is still the case and I restored sda1 to sda4 or sdb1 to sdb4 from the PBS to a new disk, would that disk be bootable, or would some grub data be missing?
2.) How would the restore of partitions with proxmox-backup-client work, so that I get an identical disk as before, just with sda5/sdb5 missing, which I would then create myself and use to set up a new ZFS pool to restore my guests from PBS to?
3.) Is there maybe a way to exclude partitions with the PBS client, like excluding files/folders? Then I could back up the entire disk, just with sda5/sdb5 excluded.
 

BIOS Grub – Backing up MBR

While many Linux users are transitioning to using EFI as the standard, lots of users still use the BIOS version of Grub, because not every computer can run EFI well. If you have a BIOS install of Linux, your Grub bootloader makes use of the Master Boot Record. This means that during your Linux OS's installation, the bootloader was installed in the very first sectors of your hard drive, rather than in a folder, like with Grub EFI variants.
So according to this, grub is written to the first sectors of an MBR disk, like I remembered. But my UEFI is set to legacy mode, so it should use BIOS grub and not UEFI to boot. My system disk, however, uses GPT. So I use BIOS with grub and GPT and got a 1MB partition of type "BIOS boot":

Code:
root@Hypervisor:~# fdisk -l /dev/sda
Disk /dev/sda: 93.16 GiB, 100030242816 bytes, 195371568 sectors
Disk model: INTEL SSDSC2BA10
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 4F32D11F-F179-4FEB-AF41-E59B6DF20956

Device       Start       End   Sectors  Size Type
/dev/sda1     2048      4095      2048    1M BIOS boot
/dev/sda2     4096    589823    585728  286M Microsoft basic data
/dev/sda3   589824   1589247    999424  488M Linux RAID
/dev/sda4  1589248 195371007 193781760 92.4G Linux RAID

Is Debian with PVE on top then only booting from this sda1 partition, or are the first sectors outside of any partition still needed for grub to boot? So if I just backed up sda1, sda2, sda3 and sda4 and restored them to a new disk, would that new disk be able to boot?
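From what I understand, with BIOS + GPT only the tiny stage-1 boot code sits in the first sector of the protective MBR, while grub's core image is embedded in the "BIOS boot" partition (sda1) rather than in unpartitioned space. So if that boot code were missing after a restore, it should be possible to rewrite it from a live system, roughly like this (a sketch; device and mount paths are placeholders):
Bash:
# reinstall BIOS grub boot code + core image from a live/rescue system
mount /dev/md0 /mnt/boot   # the restored /boot mirror (path assumed)
grub-install --target=i386-pc --boot-directory=/mnt/boot /dev/sda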
 
Still the same question, but now with a different partition layout and UEFI/systemd-boot instead of grub.

PVE installed on 200/400GB SSDs using a ZFS mirror limited to 32GiB in size, with UEFI (so systemd-boot).
So the first partition is the 1MB one.
The second partition is the 512MB or 1024MB ESP.
The third partition is the ~32GB ZFS partition for the rpool.
The fourth partition is swap, which I don't want to back up.
The fifth partition is ZFS storing my virtual disks/data, which I also don't want to back up, because they are already backed up by PBS on file/guest level.

What would be the best way to back up those disks on block level to my PBS, so I can later restore just the first 3 partitions and the partition table to an empty disk? I would then only have to fix the GPT partition table (because the partition table data at the end of the disk wouldn't be backed up?), create two new partitions for swap/ZFS (the 4th and 5th partitions), format one partition for swap, create a new ZFS mirror for my guests, and restore my guests from the PBS.

I'm already low on storage and need to back up four PVE servers, and 100+100+200+200+200+200+400+400GB of blocks would really be wasted space just to back up the bootloaders and 32GB rpools, even if deduplication saves half of the space because the second disk of each mirror deduplicates against the first. That's still 900GB of blocks that could be just 132GB.
 
Would this be the correct way to do it?

Let's say I got a ZFS mirror "rpool" booting via systemd-boot, and both GPT-partitioned disks look like this:
Code:
Disk /dev/sda: 781422768 sectors, 372.6 GiB
Model: INTEL SSDSC2BA40
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 6AA1B2CD-6D29-4DAE-B128-28CCA06A63DC
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 781422734
Partitions will be aligned on 8-sector boundaries
Total free space is 2047 sectors (1023.5 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34            2047   1007.0 KiB  EF02
   2            2048         2099199   1024.0 MiB  EF00
   3         2099200        67108864   31.0 GiB    BF01
   4        67110912        83888127   8.0 GiB     8300
   5        83888128       781422734   332.6 GiB   8300
Partitions 1-3 were created by the PVE installer (but I encrypted them afterwards), partition 4 is for LUKS-encrypted swap, and partition 5 is another encrypted ZFS pool storing my data and virtual disks. Data and guests are already backed up to PBS, so I don't want to back them up again.

To back up the disks without partitions 4 and 5, booted from a Debian pen drive:
Code:
# define repo and password:
export PBS_REPOSITORY="user@usertype!token@host:port:datastore"
export PBS_PASSWORD="myPass"

# create new temp folder for GPT backups
mkdir /tmp/GPT_backup

# create backup of partition tables (GPT)
sgdisk -b=/tmp/GPT_backup/GPT_backup_disk1.bin /dev/disk/by-id/ata-Disk1
sgdisk -b=/tmp/GPT_backup/GPT_backup_disk2.bin /dev/disk/by-id/ata-Disk2

# back up the GPT backups as well as partitions 1 to 3 of both disks to PBS
proxmox-backup-client backup \
   "gpt_backup.pxar:/tmp/GPT_backup" \
   "disk1_part1.img:/dev/disk/by-id/ata-Disk1-part1" \
   "disk1_part2.img:/dev/disk/by-id/ata-Disk1-part2" \
   "disk1_part3.img:/dev/disk/by-id/ata-Disk1-part3" \
   "disk2_part1.img:/dev/disk/by-id/ata-Disk2-part1" \
   "disk2_part2.img:/dev/disk/by-id/ata-Disk2-part2" \
   "disk2_part3.img:/dev/disk/by-id/ata-Disk2-part3" \
   --ns "some/namespace" --backup-type "host" --crypt-mode "none" --backup-id "myHostname"

# clean up
rm -r /tmp/GPT_backup
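The snapshot name needed for the restore below could then be looked up with the client, something like this (a sketch; with older client versions the subcommand might be "snapshots" instead):
Bash:
# list existing snapshots in the namespace
proxmox-backup-client snapshot list --ns "some/namespace"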

To restore both disks later, booted from a Debian pen drive:
Code:
# define repo and password:
export PBS_REPOSITORY="user@usertype!token@host:port:datastore"
export PBS_PASSWORD="myPass"

# create new mountpoint folder for GPT backups
mkdir /mnt/GPT_backup

# mount archive with the partition table backups, restore partition tables and unmount it
proxmox-backup-client mount "myHostname/someSnapshot" "gpt_backup.pxar" "/mnt/GPT_backup" --ns "some/namespace"
sgdisk -l=/mnt/GPT_backup/GPT_backup_disk1.bin /dev/disk/by-id/ata-Disk1
sgdisk -l=/mnt/GPT_backup/GPT_backup_disk2.bin /dev/disk/by-id/ata-Disk2
umount /mnt/GPT_backup

# map and restore disk 1 partition 1
# (the map command prints the loop device it assigned; /dev/loop0 etc. is assumed below)
proxmox-backup-client map "myHostname/someSnapshot" "disk1_part1.img" --ns "some/namespace"
dd if=/dev/loop0 of=/dev/disk/by-id/ata-Disk1-part1 bs=1M conv=noerror,sync status=progress
proxmox-backup-client unmap /dev/loop0

# map and restore disk 1 partition 2
proxmox-backup-client map "myHostname/someSnapshot" "disk1_part2.img" --ns "some/namespace"
dd if=/dev/loop0 of=/dev/disk/by-id/ata-Disk1-part2 bs=1M conv=noerror,sync status=progress
proxmox-backup-client unmap /dev/loop0

# map and restore disk 1 partition 3
proxmox-backup-client map "myHostname/someSnapshot" "disk1_part3.img" --ns "some/namespace"
dd if=/dev/loop0 of=/dev/disk/by-id/ata-Disk1-part3 bs=1M conv=noerror,sync status=progress
proxmox-backup-client unmap /dev/loop0

# map and restore disk 2 partition 1
proxmox-backup-client map "myHostname/someSnapshot" "disk2_part1.img" --ns "some/namespace"
dd if=/dev/loop0 of=/dev/disk/by-id/ata-Disk2-part1 bs=1M conv=noerror,sync status=progress
proxmox-backup-client unmap /dev/loop0

# map and restore disk 2 partition 2
proxmox-backup-client map "myHostname/someSnapshot" "disk2_part2.img" --ns "some/namespace"
dd if=/dev/loop0 of=/dev/disk/by-id/ata-Disk2-part2 bs=1M conv=noerror,sync status=progress
proxmox-backup-client unmap /dev/loop0

# map and restore disk 2 partition 3
proxmox-backup-client map "myHostname/someSnapshot" "disk2_part3.img" --ns "some/namespace"
dd if=/dev/loop0 of=/dev/disk/by-id/ata-Disk2-part3 bs=1M conv=noerror,sync status=progress
proxmox-backup-client unmap /dev/loop0

# creating encrypted swap on partition 4 is done later when booted into restored PVE

# create new ZFS pool on partition 5
zpool create dpool -o ashift=12 mirror /dev/disk/by-id/ata-Disk1-part5 /dev/disk/by-id/ata-Disk2-part5
zpool export dpool
# creating encrypted datasets, adding PVE storages and restoring guests + data is done later when booted into restored PVE

# clean up
rmdir /mnt/GPT_backup

Would this be the correct approach?
I really would like to get this working, so I could back up all my PVE hosts before upgrading them to PVE 8 and restore them if something isn't working...

Is there an easy way to find out which img got mapped to which loopback device, so I can check that I'm not referencing the wrong one when doing my dd?
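For what it's worth, losetup can list all loop devices together with their backing files, and the map command itself should print which device it attached the image to:
Bash:
# show loop devices and what backs them
losetup -l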


Reference:
https://wiki.archlinux.org/title/GPT_fdisk#Backup_and_restore_partition_table
https://wiki.archlinux.org/title/Dd#Disk_cloning_and_restore
https://pbs.proxmox.com/docs/backup-client.html
https://pbs.proxmox.com/docs/command-syntax.html#proxmox-backup-client
 
I've set up some PVE and PBS VMs for testing, but it looks like the proxmox-backup-client can't open my partitions when running this:

Bash:
# define repo and password:
export PBS_REPOSITORY="root@pam@192.168.43.81:8007:TestDS1"
export PBS_PASSWORD="MyPass"

proxmox-backup-client backup \
   "gpt_backup.pxar:/tmp/GPT_backup/" \
   "disk1_part1.img:/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1" \
   "disk1_part2.img:/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part2" \
   "disk1_part3.img:/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part3" \
   "disk2_part1.img:/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1" \
   "disk2_part2.img:/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part2" \
   "disk2_part3.img:/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part3" \
   --ns "TestNS" --backup-type "host" --crypt-mode "none" --backup-id "TestPVE2"

It fails with: Error: unable to access '/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1' - No such file or directory (os error 2)

But the path looks good to me:
Code:
root@TestPVE2Backup:~# ls -la /dev/disk/by-id
total 0
drwxr-xr-x 2 root root 480 Jun 21 14:59 .
drwxr-xr-x 8 root root 160 Jun 21 14:31 ..
lrwxrwxrwx 1 root root   9 Jun 21 14:31 ata-QEMU_DVD-ROM_QM00003 -> ../../sr0
lrwxrwxrwx 1 root root  10 Jun 21 14:31 dm-name-TestPVE2Backup--vg-root -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jun 21 14:31 dm-name-TestPVE2Backup--vg-swap_1 -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jun 21 14:31 dm-uuid-LVM-Mqh50AD3fsOb5gdG8DaZp2YnGt8vik67dJid8rzbwbkuf4eZ552ILr7hzlflq0y9 -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jun 21 14:31 dm-uuid-LVM-Mqh50AD3fsOb5gdG8DaZp2YnGt8vik67NcnhA5SmPYbUl4IPfG9YNLHZpXAYrvCV -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jun 21 14:31 lvm-pv-uuid-IGgZ5k-Z9PI-C8KD-jiY6-Rcbq-FRos-a5ySFz -> ../../sda3
lrwxrwxrwx 1 root root   9 Jun 21 14:31 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0 -> ../../sda
lrwxrwxrwx 1 root root  10 Jun 21 14:31 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part1 -> ../../sda1
lrwxrwxrwx 1 root root  10 Jun 21 14:31 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part2 -> ../../sda2
lrwxrwxrwx 1 root root  10 Jun 21 14:31 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part3 -> ../../sda3
lrwxrwxrwx 1 root root   9 Jun 21 14:31 scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 -> ../../sdb
lrwxrwxrwx 1 root root  10 Jun 21 14:31 scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  10 Jun 21 14:31 scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part2 -> ../../sdb2
lrwxrwxrwx 1 root root  10 Jun 21 14:31 scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  10 Jun 21 14:31 scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part4 -> ../../sdb4
lrwxrwxrwx 1 root root  10 Jun 21 14:31 scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part5 -> ../../sdb5
lrwxrwxrwx 1 root root   9 Jun 21 14:59 scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 -> ../../sdc
lrwxrwxrwx 1 root root  10 Jun 21 14:59 scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1 -> ../../sdc1
lrwxrwxrwx 1 root root  10 Jun 21 14:59 scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part2 -> ../../sdc2
lrwxrwxrwx 1 root root  10 Jun 21 14:59 scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part3 -> ../../sdc3
lrwxrwxrwx 1 root root  10 Jun 21 14:59 scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part4 -> ../../sdc4
lrwxrwxrwx 1 root root  10 Jun 21 14:59 scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part5 -> ../../sdc5

root@TestPVE2Backup:~# lsblk
NAME                          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                             8:0    0   16G  0 disk
├─sda1                          8:1    0  512M  0 part /boot/efi
├─sda2                          8:2    0  488M  0 part /boot
└─sda3                          8:3    0   15G  0 part
  ├─TestPVE2Backup--vg-root   254:0    0   14G  0 lvm  /
  └─TestPVE2Backup--vg-swap_1 254:1    0  976M  0 lvm  [SWAP]
sdb                             8:16   0   32G  0 disk
├─sdb1                          8:17   0 1007K  0 part
├─sdb2                          8:18   0  512M  0 part
├─sdb3                          8:19   0 15.5G  0 part
├─sdb4                          8:20   0    2G  0 part
└─sdb5                          8:21   0   14G  0 part
sdc                             8:32   0   32G  0 disk
├─sdc1                          8:33   0 1007K  0 part
├─sdc2                          8:34   0  512M  0 part
├─sdc3                          8:35   0 15.5G  0 part
├─sdc4                          8:36   0    2G  0 part
└─sdc5                          8:37   0   14G  0 part
sr0                            11:0    1 1024M  0 rom

root@TestPVE2Backup:~# fdisk -l /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1
Disk /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1: 1007 KiB, 1031168 bytes, 2014 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
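A quick way to rule out path problems before starting the backup would be to check that every source is actually a block device (a sketch using the same paths as above):
Bash:
# verify that every source path exists and is a block device
for part in /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi{1,2}-part{1,2,3}; do
    [ -b "$part" ] && echo "OK: $part" || echo "MISSING: $part"
done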
 
Finally got this working...

In case someone in the future needs to do the same thing, here is how it works:

1.) Install a headless Debian on a pen drive or whatever and boot from it instead of your PVE, so the PVE system disks aren't mounted
2.) Install the proxmox-backup-client on Debian as described here: https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-client-on-debian
3.) Install gdisk on Debian: apt update && apt install gdisk
4.) Create a backup script similar to this and adapt it to your needs:
Bash:
#!/bin/bash

# required packages that don't come with Debian:
# - proxmox-backup-client
# - gdisk

# define PBS repo, password and namespace:
export PBS_REPOSITORY="root@pam@192.168.43.81:8007:TestDS1"
export PBS_PASSWORD="YourPass"
namespace="TestNS"

# define disks to backup and backup ID
disk1="scsi-0QEMU_QEMU_HARDDISK_drive-scsi1"
disk2="scsi-0QEMU_QEMU_HARDDISK_drive-scsi2"
backupid="TestPVE2"

# create new temp folder for GPT backups
if [ ! -d "/tmp/GPT_backup" ]; then
    mkdir /tmp/GPT_backup
fi

# create backup of partition tables (GPT)
sgdisk -b="/tmp/GPT_backup/GPT_backup_${disk1}.bin" "/dev/disk/by-id/${disk1}"
sgdisk -b="/tmp/GPT_backup/GPT_backup_${disk2}.bin" "/dev/disk/by-id/${disk2}"

# hash partitions to be able to verify restore later
echo "Hashing partitions..."
touch /tmp/GPT_backup/sha256.txt
sha256sum "/dev/disk/by-id/${disk1}-part1" | tee /tmp/GPT_backup/sha256.txt
sha256sum "/dev/disk/by-id/${disk1}-part2" | tee -a /tmp/GPT_backup/sha256.txt
sha256sum "/dev/disk/by-id/${disk1}-part3" | tee -a /tmp/GPT_backup/sha256.txt
sha256sum "/dev/disk/by-id/${disk2}-part1" | tee -a /tmp/GPT_backup/sha256.txt
sha256sum "/dev/disk/by-id/${disk2}-part2" | tee -a /tmp/GPT_backup/sha256.txt
sha256sum "/dev/disk/by-id/${disk2}-part3" | tee -a /tmp/GPT_backup/sha256.txt

# back up the GPT backups and sha256 hashes, as well as partitions 1 to 3 of both disks, to PBS
proxmox-backup-client backup \
    "gpt_backup.pxar:/tmp/GPT_backup/" \
    "${disk1}-part1.img:/dev/disk/by-id/${disk1}-part1" \
    "${disk1}-part2.img:/dev/disk/by-id/${disk1}-part2" \
    "${disk1}-part3.img:/dev/disk/by-id/${disk1}-part3" \
    "${disk2}-part1.img:/dev/disk/by-id/${disk2}-part1" \
    "${disk2}-part2.img:/dev/disk/by-id/${disk2}-part2" \
    "${disk2}-part3.img:/dev/disk/by-id/${disk2}-part3" \
    --ns "${namespace}" \
    --backup-type host \
    --crypt-mode none \
    --backup-id "${backupid}"

# clean up
if [ -d "/tmp/GPT_backup" ]; then
    rm -r /tmp/GPT_backup
fi
5.) Run that script to do the backup
6.) Run a verify on the PBS to be sure that the backup is valid
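The verify can be started from the PBS web UI, or directly on the PBS host, roughly like this (a sketch; the datastore name is a placeholder):
Bash:
# on the PBS server: verify all snapshots in the datastore
proxmox-backup-manager verify TestDS1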

If you then ever need to do a restore, create a restore script like this:
Bash:
#!/bin/bash

# required packages that don't come with Debian:
# - proxmox-backup-client
# - gdisk

# define PBS repo, password and namespace:
export PBS_REPOSITORY="root@pam@192.168.43.81:8007:TestDS1"
export PBS_PASSWORD="YourPass"
namespace="TestNS"

# define name of old/new disks and backup ID
olddisk1="scsi-0QEMU_QEMU_HARDDISK_drive-scsi1"
newdisk1="scsi-0QEMU_QEMU_HARDDISK_drive-scsi3"
olddisk2="scsi-0QEMU_QEMU_HARDDISK_drive-scsi2"
newdisk2="scsi-0QEMU_QEMU_HARDDISK_drive-scsi4"
snapshot="2023-06-28T16:40:07Z"
backupid="TestPVE2"

# ask for confirmation
while true; do
    read -p "Do you really want to do a restore? Disks ${newdisk1} and ${newdisk2} will be wiped! If yes, type 'YesNukeEm': " answer
    case $answer in
        YesNukeEm ) break;;
        * ) exit;;
    esac
done

# create new mountpoint folder for GPT backups
if [ ! -d "/mnt/GPT_backup" ]; then
    mkdir /mnt/GPT_backup
fi

# mount archive with the partition table backups
proxmox-backup-client mount \
    "host/${backupid}/${snapshot}" \
    "gpt_backup.pxar" \
    "/mnt/GPT_backup" \
    --ns "${namespace}"
   
# restore partition tables
sgdisk -l="/mnt/GPT_backup/GPT_backup_${olddisk1}.bin" "/dev/disk/by-id/${newdisk1}"
sgdisk -l="/mnt/GPT_backup/GPT_backup_${olddisk2}.bin" "/dev/disk/by-id/${newdisk2}"

# move backup GPT header to the end of the disk in case the new disk might be bigger
sgdisk -e "/dev/disk/by-id/${newdisk1}"
sgdisk -e "/dev/disk/by-id/${newdisk2}"

# delete the 5th partitions
sgdisk -d=5 "/dev/disk/by-id/${newdisk1}"
sgdisk -d=5 "/dev/disk/by-id/${newdisk2}"

# create 5th partitions using the full remaining space, in case the new disks are bigger, so no space is wasted
sgdisk -n=5:0:0 "/dev/disk/by-id/${newdisk1}"
sgdisk -n=5:0:0 "/dev/disk/by-id/${newdisk2}"

# restore disk 1 partition 1 from backup and verify it
proxmox-backup-client restore \
    "host/${backupid}/${snapshot}" \
    "${olddisk1}-part1.img" \
    - \
    --ns "${namespace}" \
    > "/dev/disk/by-id/${newdisk1}-part1"
oldhash=$(sed -n '1{p;q}' /mnt/GPT_backup/sha256.txt | cut -d" " -f1)
newhash=$(sha256sum "/dev/disk/by-id/${newdisk1}-part1" | tee /tmp/new_sha256.txt | cut -d" " -f1)
if [ "${oldhash}" = "${newhash}" ]; then
    echo "Partition '${newdisk1}-part1' successfully verified."
else
    echo "Partition '${newdisk1}-part1' verification failed! sha256 hash '${oldhash}' expected but '${newhash}' found! Aborting restore..."
    exit 1
fi

# restore disk 1 partition 2 from backup and verify it
proxmox-backup-client restore \
    "host/${backupid}/${snapshot}" \
    "${olddisk1}-part2.img" \
    - \
    --ns "${namespace}" \
    > "/dev/disk/by-id/${newdisk1}-part2"
oldhash=$(sed -n '2{p;q}' /mnt/GPT_backup/sha256.txt | cut -d" " -f1)
newhash=$(sha256sum "/dev/disk/by-id/${newdisk1}-part2" | tee -a /tmp/new_sha256.txt | cut -d" " -f1)
if [ "${oldhash}" = "${newhash}" ]; then
    echo "Partition '${newdisk1}-part2' successfully verified."
else
    echo "Partition '${newdisk1}-part2' verification failed! sha256 hash '${oldhash}' expected but '${newhash}' found! Aborting restore..."
    exit 1
fi

# restore disk 1 partition 3 from backup and verify it
proxmox-backup-client restore \
    "host/${backupid}/${snapshot}" \
    "${olddisk1}-part3.img" \
    - \
    --ns "${namespace}" \
    > "/dev/disk/by-id/${newdisk1}-part3"
oldhash=$(sed -n '3{p;q}' /mnt/GPT_backup/sha256.txt | cut -d" " -f1)
newhash=$(sha256sum "/dev/disk/by-id/${newdisk1}-part3" | tee -a /tmp/new_sha256.txt | cut -d" " -f1)
if [ "${oldhash}" = "${newhash}" ]; then
    echo "Partition '${newdisk1}-part3' successfully verified."
else
    echo "Partition '${newdisk1}-part3' verification failed! sha256 hash '${oldhash}' expected but '${newhash}' found! Aborting restore..."
    exit 1
fi

# restore disk 2 partition 1 from backup and verify it
proxmox-backup-client restore \
    "host/${backupid}/${snapshot}" \
    "${olddisk2}-part1.img" \
    - \
    --ns "${namespace}" \
    > "/dev/disk/by-id/${newdisk2}-part1"
oldhash=$(sed -n '4{p;q}' /mnt/GPT_backup/sha256.txt | cut -d" " -f1)
newhash=$(sha256sum "/dev/disk/by-id/${newdisk2}-part1" | tee -a /tmp/new_sha256.txt | cut -d" " -f1)
if [ "${oldhash}" = "${newhash}" ]; then
    echo "Partition '${newdisk2}-part1' successfully verified."
else
    echo "Partition '${newdisk2}-part1' verification failed! sha256 hash '${oldhash}' expected but '${newhash}' found! Aborting restore..."
    exit 1
fi

# restore disk 2 partition 2 from backup and verify it
proxmox-backup-client restore \
    "host/${backupid}/${snapshot}" \
    "${olddisk2}-part2.img" \
    - \
    --ns "${namespace}" \
    > "/dev/disk/by-id/${newdisk2}-part2"
oldhash=$(sed -n '5{p;q}' /mnt/GPT_backup/sha256.txt | cut -d" " -f1)
newhash=$(sha256sum "/dev/disk/by-id/${newdisk2}-part2" | tee -a /tmp/new_sha256.txt | cut -d" " -f1)
if [ "${oldhash}" = "${newhash}" ]; then
    echo "Partition '${newdisk2}-part2' successfully verified."
else
    echo "Partition '${newdisk2}-part2' verification failed! sha256 hash '${oldhash}' expected but '${newhash}' found! Aborting restore..."
    exit 1
fi

# restore disk 2 partition 3 from backup and verify it
proxmox-backup-client restore \
    "host/${backupid}/${snapshot}" \
    "${olddisk2}-part3.img" \
    - \
    --ns "${namespace}" \
    > "/dev/disk/by-id/${newdisk2}-part3"
oldhash=$(sed -n '6{p;q}' /mnt/GPT_backup/sha256.txt | cut -d" " -f1)
newhash=$(sha256sum "/dev/disk/by-id/${newdisk2}-part3" | tee -a /tmp/new_sha256.txt | cut -d" " -f1)
if [ "${oldhash}" = "${newhash}" ]; then
    echo "Partition '${newdisk2}-part3' successfully verified."
else
    echo "Partition '${newdisk2}-part3' verification failed! sha256 hash '${oldhash}' expected but '${newhash}' found! Aborting restore..."
    exit 1
fi

# unmount the backup archive containing the GPT backups and hashes
umount /mnt/GPT_backup

# clean up
if [ -d "/mnt/GPT_backup" ]; then
    rmdir /mnt/GPT_backup
fi
if [ -f "/tmp/new_sha256.txt" ]; then
    rm /tmp/new_sha256.txt
fi

This will then:
- clone the GPT partition tables from the backup on the PBS to the new disks
- move the backup GPT header to the end of the disk, in case you restore to a bigger disk (in this case I restored a backup of a 32GiB disk to an empty 40GiB disk)
- destroy the 5th partition and recreate it to fill all the available space, so no space is wasted when using a bigger disk. In my case, partitions 1 to 3 of the ZFS mirror are the default partitions the PVE installer creates (so legacy/BIOS boot partition, ESP partition and partition for the rpool). The 4th partition I manually created for swap, and the 5th partition I created for another ZFS mirror storing my guests.
- restore partitions 1 to 3 of both backed-up disks to partitions 1 to 3 on the two new disks
- checksum the new partitions after the restore has finished and compare them with the checksums created at backup time, so you know the restored partitions are exactly the same

You should then have a running PVE again, just now with empty 4th and 5th partitions. Later you can manually format these for swap or ZFS and restore your guests.

You might need to edit the fstab if partitions like my 4th swap partition can't be mounted anymore, as they are now unformatted.
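Since the six restore-and-verify blocks in the script above only differ in the disk and partition number, they could also be collapsed into a loop, roughly like this (a sketch reusing the variables defined at the top of the restore script):
Bash:
line=0
for pair in "${olddisk1}:${newdisk1}" "${olddisk2}:${newdisk2}"; do
    old="${pair%%:*}"; new="${pair##*:}"
    for part in 1 2 3; do
        line=$((line + 1))
        # restore the partition image from PBS directly onto the new partition
        proxmox-backup-client restore "host/${backupid}/${snapshot}" \
            "${old}-part${part}.img" - --ns "${namespace}" \
            > "/dev/disk/by-id/${new}-part${part}"
        # compare against the hash recorded at backup time (one hash per line)
        oldhash=$(sed -n "${line}{p;q}" /mnt/GPT_backup/sha256.txt | cut -d" " -f1)
        newhash=$(sha256sum "/dev/disk/by-id/${new}-part${part}" | cut -d" " -f1)
        if [ "${oldhash}" != "${newhash}" ]; then
            echo "Partition '${new}-part${part}' verification failed!" >&2
            exit 1
        fi
        echo "Partition '${new}-part${part}' successfully verified."
    done
done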
 
Some logs from my test:
Code:
root@TestPVE2Backup:~/scripts# lsblk
NAME                          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                             8:0    0   16G  0 disk
├─sda1                          8:1    0  512M  0 part /boot/efi
├─sda2                          8:2    0  488M  0 part /boot
└─sda3                          8:3    0   15G  0 part
  ├─TestPVE2Backup--vg-root   254:0    0   14G  0 lvm  /
  └─TestPVE2Backup--vg-swap_1 254:1    0  976M  0 lvm  [SWAP]
sdb                             8:16   0   32G  0 disk
├─sdb1                          8:17   0 1007K  0 part
├─sdb2                          8:18   0  512M  0 part
├─sdb3                          8:19   0 15.5G  0 part
├─sdb4                          8:20   0    2G  0 part
└─sdb5                          8:21   0   14G  0 part
sdc                             8:32   0   32G  0 disk
├─sdc1                          8:33   0 1007K  0 part
├─sdc2                          8:34   0  512M  0 part
├─sdc3                          8:35   0 15.5G  0 part
├─sdc4                          8:36   0    2G  0 part
└─sdc5                          8:37   0   14G  0 part
sdd                             8:48   0   40G  0 disk
sde                             8:64   0   40G  0 disk
sr0                            11:0    1 1024M  0 rom


root@TestPVE2Backup:~/scripts# /root/scripts/backup.sh
The operation has completed successfully.
The operation has completed successfully.
Hashing partitions...
2c4881bc6abcae1bfe2c92d042a3ae2903e3245732a86762c84be3f7e5b50705  /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1
f1d5d064e5777ed1f52d18e2714097d681e3932b368255ff4bf7b6bfff9b6dc7  /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part2
9bf29279e817e172d339d44e1265f5adc4d689ca6feb157b4a99579b71a00790  /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part3
2c4881bc6abcae1bfe2c92d042a3ae2903e3245732a86762c84be3f7e5b50705  /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1
7c8b7077a45942daa8e754b2f4aef3e20c747735b8b1a8480f881da82d99d0bd  /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part2
30cf794ce24e4281fbdbb577e9756fa9ecfcd6dd48dadc04271a81bfa4cce1a2  /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part3
Starting backup: [TestNS]:host/TestPVE2/2023-06-28T16:40:07Z
Client name: TestPVE2Backup
Starting backup protocol: Wed Jun 28 18:40:07 2023
No previous manifest available.
Upload directory '/tmp/GPT_backup/' to 'root@pam@192.168.43.81:8007:TestDS1' as gpt_backup.pxar.didx
gpt_backup.pxar: had to backup 36.267 KiB of 36.267 KiB (compressed 1.032 KiB) in 0.01s
gpt_backup.pxar: average backup speed: 2.403 MiB/s
Upload image '/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1' to 'root@pam@192.168.43.81:8007:TestDS1' as scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1.img.fidx
scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1.img: had to backup 1007 KiB of 1007 KiB (compressed 59 B) in 0.04s
scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1.img: average backup speed: 27.776 MiB/s
Upload image '/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part2' to 'root@pam@192.168.43.81:8007:TestDS1' as scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part2.img.fidx
scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part2.img: had to backup 512 MiB of 512 MiB (compressed 509.836 MiB) in 9.68s
scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part2.img: average backup speed: 52.887 MiB/s
Upload image '/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part3' to 'root@pam@192.168.43.81:8007:TestDS1' as scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part3.img.fidx
scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part3.img: had to backup 3.624 GiB of 15.499 GiB (compressed 2.358 GiB) in 217.17s
scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part3.img: average backup speed: 17.088 MiB/s
scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part3.img: backup was done incrementally, reused 11.875 GiB (76.6%)
Upload image '/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1' to 'root@pam@192.168.43.81:8007:TestDS1' as scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1.img.fidx
scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1.img: had to backup 1007 KiB of 1007 KiB (compressed 59 B) in 0.04s
scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1.img: average backup speed: 23.469 MiB/s
Upload image '/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part2' to 'root@pam@192.168.43.81:8007:TestDS1' as scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part2.img.fidx
scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part2.img: had to backup 512 MiB of 512 MiB (compressed 509.836 MiB) in 11.51s
scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part2.img: average backup speed: 44.485 MiB/s
Upload image '/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part3' to 'root@pam@192.168.43.81:8007:TestDS1' as scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part3.img.fidx
scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part3.img: had to backup 3.624 GiB of 15.499 GiB (compressed 2.358 GiB) in 216.78s
scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part3.img: average backup speed: 17.119 MiB/s
scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part3.img: backup was done incrementally, reused 11.875 GiB (76.6%)
Uploaded backup catalog (186 B)
Duration: 455.39s
End Time: Wed Jun 28 18:47:42 2023


root@TestPVE2Backup:~/scripts# /root/scripts/restore.sh
Do you really want do a restore? Disks scsi-0QEMU_QEMU_HARDDISK_drive-scsi3 and scsi-0QEMU_QEMU_HARDDISK_drive-scsi4 will be wiped! If yes write 'YesNukeEm'YesNukeEm
FUSE library version: 3.10.3
Warning: Partition table header claims that the size of partition table
entries is 0 bytes, but this program  supports only 128-byte entries.
Adjusting accordingly, but partition table may be garbage.
Warning: Partition table header claims that the size of partition table
entries is 0 bytes, but this program  supports only 128-byte entries.
Adjusting accordingly, but partition table may be garbage.
Creating new GPT entries in memory.
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.bin
Warning! Current disk size doesn't match that of the backup!
Adjusting sizes to match, but subsequent problems are possible!
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.bin
The operation has completed successfully.
Warning: Partition table header claims that the size of partition table
entries is 0 bytes, but this program  supports only 128-byte entries.
Adjusting accordingly, but partition table may be garbage.
Warning: Partition table header claims that the size of partition table
entries is 0 bytes, but this program  supports only 128-byte entries.
Adjusting accordingly, but partition table may be garbage.
Creating new GPT entries in memory.
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi2.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi2.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi2.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi2.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi2.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi2.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi2.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi2.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi2.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi2.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi2.bin
Warning! Current disk size doesn't match that of the backup!
Adjusting sizes to match, but subsequent problems are possible!
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi2.bin
Error 38 when determining sector size! Setting sector size to 512
Disk device is /mnt/GPT_backup/GPT_backup_scsi-0QEMU_QEMU_HARDDISK_drive-scsi2.bin
The operation has completed successfully.
The operation has completed successfully.
The operation has completed successfully.
The operation has completed successfully.
The operation has completed successfully.
The operation has completed successfully.
The operation has completed successfully.
restore image complete (bytes=1031168, duration=0.01s, speed=124.43MB/s)
Partition 'scsi-0QEMU_QEMU_HARDDISK_drive-scsi3-part1' successfully verified.
restore image complete (bytes=536870912, duration=9.02s, speed=56.78MB/s)
Partition 'scsi-0QEMU_QEMU_HARDDISK_drive-scsi3-part2' successfully verified.
restore image complete (bytes=16641950208, duration=115.89s, speed=136.95MB/s)
Partition 'scsi-0QEMU_QEMU_HARDDISK_drive-scsi3-part3' successfully verified.
restore image complete (bytes=1031168, duration=0.01s, speed=116.15MB/s)
Partition 'scsi-0QEMU_QEMU_HARDDISK_drive-scsi4-part1' successfully verified.
restore image complete (bytes=536870912, duration=8.50s, speed=60.21MB/s)
Partition 'scsi-0QEMU_QEMU_HARDDISK_drive-scsi4-part2' successfully verified.
restore image complete (bytes=16641950208, duration=120.13s, speed=132.11MB/s)
Partition 'scsi-0QEMU_QEMU_HARDDISK_drive-scsi4-part3' successfully verified.


root@TestPVE2Backup:~/scripts# lsblk
NAME                          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                             8:0    0   16G  0 disk
├─sda1                          8:1    0  512M  0 part /boot/efi
├─sda2                          8:2    0  488M  0 part /boot
└─sda3                          8:3    0   15G  0 part
  ├─TestPVE2Backup--vg-root   254:0    0   14G  0 lvm  /
  └─TestPVE2Backup--vg-swap_1 254:1    0  976M  0 lvm  [SWAP]
sdb                             8:16   0   32G  0 disk
├─sdb1                          8:17   0 1007K  0 part
├─sdb2                          8:18   0  512M  0 part
├─sdb3                          8:19   0 15.5G  0 part
├─sdb4                          8:20   0    2G  0 part
└─sdb5                          8:21   0   14G  0 part
sdc                             8:32   0   32G  0 disk
├─sdc1                          8:33   0 1007K  0 part
├─sdc2                          8:34   0  512M  0 part
├─sdc3                          8:35   0 15.5G  0 part
├─sdc4                          8:36   0    2G  0 part
└─sdc5                          8:37   0   14G  0 part
sdd                             8:48   0   40G  0 disk
├─sdd1                          8:49   0 1007K  0 part
├─sdd2                          8:50   0  512M  0 part
├─sdd3                          8:51   0 15.5G  0 part
├─sdd4                          8:52   0    2G  0 part
└─sdd5                          8:53   0   22G  0 part
sde                             8:64   0   40G  0 disk
├─sde1                          8:65   0 1007K  0 part
├─sde2                          8:66   0  512M  0 part
├─sde3                          8:67   0 15.5G  0 part
├─sde4                          8:68   0    2G  0 part
└─sde5                          8:69   0   22G  0 part
sr0                            11:0    1 1024M  0 rom
 
@Dunuin Your script above, does it still end up reading the entire LVM partition (part3), including the unused space when sending it to PBS? Same for restore?
 
Yes, the whole partition. But PBS does a good job at deduplicating and compressing unused space.
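That deduplication works best when the unused space actually contains zeros. If it doesn't, one could zero the free space from inside the running system before the backup (a generic sketch, not something the scripts above do; be careful on thin-provisioned or nearly full storage):
Bash:
# fill free space with zeros, then remove the file again
dd if=/dev/zero of=/zerofile bs=1M || true  # dd stops when the filesystem is full
sync
rm /zerofile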
 
Thank you for clarifying. Just trying to wrap my head around how best to back up the server (single, no clusters).

During install, I selected the ext4 option, which created a 96GB LV (root) and a 783GB LV (data) on the 931GB (1TB) SSD. The latter is used for VM storage. If I'm understanding all of this correctly, then the backup process will be reading that entire 783GB, including unused sections?

Code:
nvme0n1                       259:0    0 931.5G  0 disk
├─nvme0n1p1                   259:1    0  1007K  0 part
├─nvme0n1p2                   259:2    0   512M  0 part /boot/efi
└─nvme0n1p3                   259:3    0   931G  0 part
  ├─pve-swap                  252:0    0    20G  0 lvm  [SWAP]
  ├─pve-root                  252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta            252:2    0     8G  0 lvm
  │ └─pve-data-tpool          252:4    0   783G  0 lvm

There's more to the above, but that's the gist of it. Of that ~800GB, roughly 270GB is in use by VMs. When I run the backup client, will it read only the used areas, or the unused ones as well?
 
If I'm understanding all of this correctly, then the backup process will be reading that entire 783GB including unused sections?
Yes

During install, i selected the ext4 option which created a 96GB LV (root) and 783GB LV (data) on the 931GB (1TB) ssd. The latter of which is used for vm storage.
That's one of the reasons why I prefer to have dedicated system disks (or at least partitions) and dedicated VM storage disks: when backing up the PVE system, you don't have to back up all those VMs again that you already got backups of.
So one option with single-disk nodes would be to tell the PVE installer to only create a 32GB LVM/ZFS for the system, and then later manually create and format another partition in that unallocated space to store your VMs (see the sketch below). That way you could skip that VM storage partition and only back up the first 3 partitions (so 33GB) with the PVE system and bootloaders. After a restore you would then create/format that 4th partition again and restore your VMs from the PBS to it.
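Creating and formatting that extra VM-storage partition in the unallocated space could look roughly like this (a sketch; the device path and filesystem are placeholders):
Bash:
# create a 4th partition filling the remaining free space (type: Linux filesystem)
sgdisk -n=4:0:0 -t=4:8300 /dev/nvme0n1
mkfs.ext4 /dev/nvme0n1p4   # then add it as a storage in PVE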

When I run the backup client, will it be reading the used areas or unused also?
All areas.
 
Thanks for following up.

Looks like there's no good path forward other than starting over. I suppose running proxmox-backup-client from a bootable disk, but with a folder-level backup (such as /), is the next best thing. Restoring would be a manual process, involving reinstalling packages and copying the old config files back to the right places. The resulting file isn't overly large and contains all the important bits except the VMs.
 
