[SOLVED] Official way to backup proxmox VE itself?

No, I didn't know that but it's not a problem.
I continue to run PBS as well.

In any case I trust that the Veeam team will solve the problems, so my question remains valid.
 

I just feel like any backup taken while the system is running is not going to do you any good: the configuration is all in /etc/pve, which is a filesystem mounted off a running database - see my post above on how to back up that alone. Whether you choose to back up all disks, a single disk or files, it won't guarantee you a consistent snapshot of that database.
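For reference, a config-only grab of that boils down to something like this - a sketch, assuming the default pmxcfs database location (/var/lib/pve-cluster/config.db) and that the sqlite3 package is installed:
Code:
# Copy the live view of the cluster config (the files are tiny):
tar czf /root/pve-etc-backup.tar.gz /etc/pve

# And/or take a consistent copy of the database that backs /etc/pve:
sqlite3 /var/lib/pve-cluster/config.db ".backup /root/config.db.bak"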
 
Have done it twice, with slightly smaller (50-100MB?) NVMe's & had no problem. Usually just the end will get truncated, which shouldn't really contain anything, assuming you started with an empty NVMe for the original PVE install & hardly use the PVE OS disk for storage - as is my use case.
I find that using an NVMe to only about 30% of its capacity greatly improves its longevity (as in years). This, I believe, is most cost-effective.
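For anyone new to it, the dd-image approach being discussed is roughly this - a sketch only, with placeholder device/file names, meant to be run from a live/rescue system rather than the running host:
Code:
# Image the PVE OS disk to a compressed file:
dd if=/dev/nvme0n1 bs=1M status=progress | zstd -T0 -o pve-disk.img.zst

# Restore onto the replacement disk (same size, or slightly smaller as described above):
zstd -d -c pve-disk.img.zst | dd of=/dev/nvme0n1 bs=1M status=progress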

Hi there,
I recently tried restoring a dd-compressed image of a 512GB SSD to a 256GB one. Proxmox can boot up, but the LVM volumes can't be mounted.
Then I used gdisk to delete and re-create the LVM partition at the smaller size; now Proxmox can at least recognize the LVs. However, when I tried to resize the LV filesystem it didn't work - it seems the empty LV space had been pre-allocated.

Probably I should shrink the partition and filesystem first, then do the dd (if the dd method is still the way to go).

P.S. Found this, which should be helpful: https://blog.sensecodons.com/2017/03/shrinking-disk-to-migrate-to-smaller-ssd.html

update: unfortunately lvm-thin cannot be shrunk. A new thin pool has to be re-created.

update2: I successfully did a 512GB -> 256GB 'migration'. It wasn't smooth, so watch out for the bumps:

1. After dd'ing the 512GB image to the 256GB SSD, use gdisk to delete the LVM partition and then re-create a smaller one, taking care to keep the same UUID of the partition (a command sketch for this and step 3 follows after these steps).
2.
Code:
lvremove /dev/pve/data                # drop the old (oversized) thin pool
lvcreate -L 179G -n data pve          # re-create 'data' at the new, smaller size
# 179G is calculated from lsblk: what remains after deducting pve-swap and pve-root on nvme0n1p3
lvconvert --type thin-pool pve/data   # turn the plain LV back into a thin pool
pvresize /dev/nvme0n1p3               # update the PV to match the shrunken partition
reboot
From here Proxmox should recognize the lvm-thin storage, but there's nothing on it yet.
3. cp the backup (.zst) to the new SSD's /var/lib/vz/dump/, remove the old VMs, restore the .zst backup.
Done!
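As mentioned in step 1, here is a rough sketch of the commands behind steps 1 and 3 - from memory, using sgdisk instead of interactive gdisk for brevity; the partition number, GUID and dump filename are placeholders:
Code:
# Step 1 (sketch): note the partition's unique GUID, delete the partition,
# then re-create it smaller with the same GUID (partition 3 = LVM here):
sgdisk -i 3 /dev/nvme0n1          # prints "Partition unique GUID"
sgdisk -d 3 /dev/nvme0n1
sgdisk -n 3:0:0 -t 3:8e00 -u 3:<OLD-GUID> /dev/nvme0n1

# Step 3 (sketch): restore a VM dump that was copied into /var/lib/vz/dump/
qmrestore /var/lib/vz/dump/vzdump-qemu-<VMID>-<timestamp>.vma.zst <VMID> --storage local-lvm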
 
How could you dd to the smaller disk? Have you used sgdisk -R=/dev/target /dev/source to replicate the GPT before executing the dd command? I'd like to see the steps for that. In theory, what you described should not work as dd cannot manage to replicate the table that would be at the end of the target disk - as it is too small. Did you just create a new GPT with gdisk?
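For reference, the replication I'm referring to would be something like this (a sketch, device names are placeholders):
Code:
# Copy the source disk's GPT onto the target, then randomize the GUIDs
# so the two disks don't share identifiers:
sgdisk -R=/dev/<target> /dev/<source>
sgdisk -G /dev/<target>
# With a smaller target the copied table points past the end of the disk,
# which is exactly why I'm asking how your dd approach worked.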
 
I continue to run PBS as well.
I'm here literally because I have given up on PBS. The Push datastore sync feature is apparently not quite as solid as I'd hoped. My local Debian/PBS LXC (installed 2022, upgraded continually to current Deb12/PBS 3.3.2) has nearly 3 yrs of backups, over 5TB, on it, all the while with a remote PBS doing a datastore pull sync. I just attempted to move the sync from pull to push (for easier/more straightforward management), but lo and behold, found out that it only syncs the single LXC that was created since upgrading the local PBS LXC to the version that supports Push. I dug a bit into it and it seems that it may have backed up that LXC with the "metadata" (or "data") chunk format rather than the "legacy" chunk format, and only that LXC will sync.

Meanwhile, while digging into why I can't get it to work, I noticed the current local prune jobs are all showing an "Unknown" state since the last PBS client/server upgrades for me back on November 14. While /var/log/whatever shows the operation as finishing, the finished prune never makes it to the GUI:

(screenshot attached: last good prune, 2024-12-28)

I did a local clone of the target PBS datastore, so I decided to try and just prune all the backups on the target PBS, but then even after garbage collection (which took like 8 hrs) the 6TB datastore showed 1.8TB in use. WTHeck? Back to the "Unknown" prune-job status in the GUI logs... I thought it was an instrumentation issue that would probably be fixed in a later PBS, but now - with only the backups of the LXC created since Nov 14th able to be push-synced to the remote datastore (it was successfully pull-syncing the entire dataset prior to this reconfigure attempt), the remote datastore holding over 3TB of phantom blocks, and both machines grinding away to simply "shuttle" this stuff a hundred miles over GigE - I'm starting to really question 3.3's stability.
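For anyone wanting to cross-check what the GUI shows against the CLI on the PBS host, the commands I mean are along these lines (datastore name is a placeholder):
Code:
# List recent tasks (prune/GC/sync) and the end state PBS recorded for them:
proxmox-backup-manager task list

# Start / inspect garbage collection for a datastore:
proxmox-backup-manager garbage-collection start <datastore>
proxmox-backup-manager garbage-collection status <datastore>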

With the remote sync so massively resource-intensive, on backups as well as syncing, I've decided to finally stem the bleeding (and eliminate the extra 2 kWh of intense grinding compute power per night needed to remotely sync the PBS datastores on these old servers): I'm just gonna move all the off-site replication to a hand-rolled zfs-send solution, like I had been threatening myself to do for 2 yrs now. The zfs send (just hand-done for now as a test) is infinitely faster and takes nearly no CPU power (the origin dataset that the local PBS datastore is on is set to compression=zle and recordsize=1M, since PBS chunks are all zstandard-compressed already and the bell curve of my PBS chunk sizes peaks over 2MB).
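In case anyone wants to reproduce the hand-done test, it's roughly this - pool/dataset names are made up for the example, and it assumes SSH root access to the off-site box:
Code:
# Dataset tuning for a PBS datastore on ZFS (chunks are already zstd-compressed):
zfs set compression=zle recordsize=1M tank/pbs-datastore

# Initial full send, then periodic incrementals between snapshots:
zfs snapshot tank/pbs-datastore@repl-1
zfs send tank/pbs-datastore@repl-1 | ssh root@offsite zfs receive -u backup/pbs-datastore

zfs snapshot tank/pbs-datastore@repl-2
zfs send -i @repl-1 tank/pbs-datastore@repl-2 | ssh root@offsite zfs receive -u backup/pbs-datastore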

It would really be elegant if PVE had a "qm remote-copy <VMID>" type of command, or even better, lift the PBS sync GUI interface and repurpose it in PVE as a "CT/VM Push Sync" feature. That would be something! And for PBS, give users with ZFS-backed datastores the option to dispense with this whole kludgey native PBS "sync", instead offering native "datastore sync" using zfs send on the back-end. Obviously it would require the "Remove vanished" option whenever a "Use ZFS replication" option is used, but it would make some of our lives infinitely better.

I'm gonna try znapzend first, as it seems like it'll do all that's necessary. I'll report back here how things go.
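If anyone's curious, the znapzend setup I plan to try follows the example format from its docs - the datasets, retention plans and remote host below are placeholders, not my real values:
Code:
znapzendzetup create --recursive --tsformat='%Y-%m-%d-%H%M%S' \
  SRC '7d=>1h,30d=>4h,90d=>1d' tank/pbs-datastore \
  DST:offsite '7d=>1h,30d=>4h,90d=>1d,1y=>1w' root@offsite:backup/pbs-datastore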

Thanks for coming to my Ted Talk ;)

-=dave
 
Maybe I found a good compromise and I would like to discuss it with you.

From what I understand you recommend redoing the server and related configurations, saving only some folders in /etc to restore in the new installation.

In my case I have a pair of disks in raid1 zfs dedicated to the operating system only. The VMs and CT are on a dedicated disk.

If I use the Veeam agent for Linux to perform a file-level backup of the entire root, I should be able to restore the state of the server at the time of the backup.
I should only need to perform a new clean installation of Proxmox, install the Veeam agent, hook it up to Veeam and, through the Veeam console, restore all the files.

Then I should only restore the VMs and CT via PBS/Veeam for Proxmox.

Is this correct?

I opened a specific discussion on the Veeam forum:
https://forums.veeam.com/kvm-rhv-olvm-proxmox-f62/backup-proxmox-os-on-raid1-zfs-t96979.html

I'm not entirely clear about the suggested setting: rootRecursion = false
I'm having a hard time understanding what this change to the veeam.ini file actually does.
 
Hi All

Just curious why not just use the PBS Linux client to back up the root OS files, in the same way the Veeam client is being put forward.

The PBS client can do a file-only backup/restore.

What am I missing here?

""Cheers
G
 
PBS only backs up VMs and CTs inside the datacenter. Not Hosts.

Hi @Mario Rossi


According to the documentation it also covers bare metal when using the PBS client.
What have I missed here?

https://pbs.proxmox.com/docs/backup-client.html

Creating Backups

This section explains how to create a backup from within the machine. This can be a physical host, a virtual machine, or a container. Such backups may contain file and image archives. There are no restrictions in this case.
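For completeness, the host-level backup that page describes looks something like this (repository and datastore names are placeholders; see the linked docs for authentication details):
Code:
# File-level backup of the host's root filesystem into a PBS datastore:
proxmox-backup-client backup root.pxar:/ --repository <pbs-host>:<datastore>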
 
The PBS client can do a file-only backup/restore.

What am I missing here?
It definitely can - but that will only be a file backup; it won't be a block/image backup, so you will not have the full partition structure. So even though a backup of the /etc folder etc. of your Proxmox host will be useful in reconstructing your setup, it is not a full backup of your boot/OS. This contrasts with a VM/LXC backup, which can be a complete backup solution.
According to the documentation it also covers bare metal when using the PBS client. What have I missed here?
Dealt with above.
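To illustrate the "useful in reconstructing your setup" part: individual paths can be pulled back out of such a host backup with the client - a sketch with placeholder snapshot/repository names:
Code:
# List snapshots, then restore the file archive of a host backup to a scratch directory:
proxmox-backup-client snapshot list --repository <pbs-host>:<datastore>
proxmox-backup-client restore host/<hostname>/<timestamp> root.pxar /tmp/restore/ --repository <pbs-host>:<datastore>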
 
@gfngfn256 thanks for the info, I was replying to Mario's statement about using Veeam to back up the files on /rpool and using PBS for the rest.

I can't see any benefit in using 2 different systems to back up /rpool.

With that said, just backing up the correct files will give the admin a good head start and save time, especially when it's a complex network setup.
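For anyone wanting a starting point, "the correct files" on a PVE host usually boil down to something like this - illustrative only, the exact list depends on your setup:
Code:
# Network, name resolution, repo and cluster/storage config in one tarball:
tar czf /root/pve-host-config.tar.gz \
    /etc/pve \
    /etc/network/interfaces \
    /etc/hosts /etc/hostname /etc/resolv.conf \
    /etc/apt/sources.list /etc/apt/sources.list.d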
 