[SOLVED] Official way to backup proxmox VE itself?

Partclone is a nice tool to do partition backups, as it only writes used blocks to the backup file.
So a 6TB disk that only contains 100MB of data will produce a 100MB file instead of a 6TB file.

Downside is that you would have to boot your server from a live medium, because mounted partitions can't be backed up.

I guess most people don't change a lot in the host's config after setting it up, so using this every x months can be an option (if the system is allowed to have planned downtime).

https://github.com/Thomas-Tsai/partclone
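For reference, a minimal sketch of what a Partclone run from a live medium might look like (the ext4 filesystem type, device path, and output path are assumptions - adjust them to your setup):

Code:
# clone an unmounted ext4 partition, writing only the used blocks into the image
partclone.ext4 -c -s /dev/sda3 -o /mnt/usb/pve-root.pcl.img
# restore later by reversing source and target
partclone.ext4 -r -s /mnt/usb/pve-root.pcl.img -o /dev/sda3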
 
> Downside is that you would have to boot your server from a live medium, because mounted partitions can't be backed up

Umm, I've been backing up and restoring my rootfs for years with fsarchiver, and zero problems...
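If it helps, a minimal fsarchiver sketch (the device and archive paths are examples, not taken from the poster's setup):

Code:
# save the filesystem on /dev/sda3 into a compressed archive
fsarchiver savefs -z 7 /mnt/backup/pve-root.fsa /dev/sda3
# restore it later onto the target partition
fsarchiver restfs /mnt/backup/pve-root.fsa id=0,dest=/dev/sda3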
 
Have you tried the Veeam Backup & Replication server? I'm using the Veeam Windows agent here, backing up to SMB, but it is basically a no-go without that Veeam Backup & Replication server. I don't want to rely on backups where the client is able to wipe all my backups because it needs to be able to overwrite/delete old backup images for compacting/merging/pruning maintenance tasks. So no ransomware protection. And while the backups don't take that long, the maintenance tasks do, and the client has to do them. It's always annoying when I can't shut that Windows client down because it needs another 3 hours for that after the weekly backup. Here it would also be useful if a server could do that on its own at night.
It looks like that could be solved by using the Veeam Backup & Replication server, but I had a look at the minimum system requirements and those are pretty bad. If I remember correctly, it needs Windows (no full Linux support) and lots of RAM and CPU (actually more demanding than the Windows client I want to back up...). So it sounds like it is only worth it with lots of clients, not if you only need to back up 2 or 3 machines at home.

Did you try Bacula or Borg for bare-metal Windows/Linux client backups?
I wanted to try those in the near future, as their backup server is less demanding and could run on Linux.
Any recommendations?

You need to spend some more time working with Veeam. I've used it for over 10 years at work and use it for bare-metal restores at home, and nothing you are describing sounds like it, especially the heavy requirements. It doesn't require much processing at all; just put it in a VM with 2 vCPUs and 8GB of RAM. Our entire infrastructure is running off an old 6-core 2.6GHz Broadwell VM with 8GB of RAM, using 80GB of hard drive space. That's for 35 VMs including Exchange and several SQL servers, doing 2 backups daily, each to a different location, plus replication twice a day.

The "wipe all my backups because".... Those are all settings you make depending on your needs, space requirements, speed requirements, and retention policies. The maintenance? Veeam assumes these are servers on 24hrs. There is nothing wrong with shutting down your server, you don't have to wait for it to finish. You can adjust maintenance for any time of day, any days of week, or months, and you can choose what type and kinds of maintenance you want to perform. Or none at all.

No ransomware protection? Not sure where you got that. Your choice depends on your backup infrastructure: back up to USB, tape, AWS, or anywhere. The ransomware protection comes from encryption, which it has, and from air-gapping it or doing AWS tape with write-once. You can also take regular snapshots and securely store them in an isolated environment (which Veeam helps you set up).
 
I boot Debian from a pen drive and then use the proxmox-backup-client to back up the 3 system partitions on block level to my PBS. That's incremental and uses deduplication, so it's not that space consuming.

And you can also tell the PVE installer to only create something like a 32GB rpool. Later you could partition the unallocated space with another partition and use that for another ZFS pool. Clonezilla would then only need to back up the first 3 partitions, so about 33GB.
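For reference, a minimal sketch of such a block-level backup with proxmox-backup-client, run from the live Debian (the repository string, partition devices, and backup ID below are assumptions - adjust them to your layout and PBS):

Code:
# back up the three system partitions as fixed-size (block-level) .img archives
export PBS_REPOSITORY='backupuser@pbs@192.168.1.10:store1'
proxmox-backup-client backup \
    part1.img:/dev/sda1 \
    part2.img:/dev/sda2 \
    part3.img:/dev/sda3 \
    --backup-type host --backup-id pve-node1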
Can you tell us more about your workflow? I want to make an image of my entire Proxmox boot drive that I can easily restore to the same drive or another drive, then simply reboot the system and have it working again. I tried using Rescuezilla to do this, just choosing backup and selecting the drive, but it took a 500GB drive, "backed it up", and the result is only 3GB. This doesn't seem anywhere near accurate, as the disk has LVM thin pools on it and within those pools there is definitely about 100GB of real data. It's as if Rescuezilla is not aware of the thin pool and doesn't even bother to back it up? Any advice is most definitely welcome!
 
You will have to script something to your needs if you want to back up the system disks to PBS. See my example here: https://forum.proxmox.com/threads/pbs-client-grub-and-just-backing-up-partitions.105990/#post-568579
Thank you for the examples. For now, I'm virtualizing PBS on one of my Proxmox nodes just to back up VMs/CTs. I'll have to figure out a suitable way to simply image the entire disk and be able to restore it, block by block and reliably. From what I can tell, Rescuezilla in its default form (just clicking backup and following that flow) will not accomplish that. I'll have to keep reading, and I'll eventually need to get a box just for PBS (and still keep the datastore on a Samba share on my Synology).
 
For a homelab, you shouldn't need a mirror/RAID1 disk for the OS. Back up your root frequently and have a spare disk handy in case you need to restore/recreate. With proper backups and a TESTED DR procedure, you should be able to get back up and running in ~2 hours or less.

RAID1 for the OS is more for datacenters and prod-critical environments where they can't afford downtime.
Why shouldn't you have RAID1 for redundancy? If something like Home Assistant or a NAS is running on that host, any amount of downtime can be an issue... what is the reasoning against this? Plus it's not like you can always drop everything and fix it in 2 hours. What if you are away and someone (SO) is at home needing to use the services on the host? This could create an issue, I'm sure. What if Pi-hole goes down and basically kills the internet for everyone on the network (who can't change DNS)? Redundancy just makes life easier, so I am really wondering what the issue is... I am NOT saying you are wrong, I just want to know why.
 
It's even better to run two PVE nodes and have one Pi-hole running on each of them. That way not only the storage but the whole server is redundant. ;) See for example here: https://forum.proxmox.com/threads/pi-hole-lxc-with-gravity-sync.109881/#post-645646
Add an SBC as a QDevice for a cluster and that NAS and Home Assistant could be fully redundant too.
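If it helps, a rough sketch of that QDevice setup, following the Proxmox clustering docs (the SBC's IP address is a placeholder):

Code:
# on the SBC (e.g. Debian / Raspberry Pi OS)
apt install corosync-qnetd

# on every PVE node
apt install corosync-qdevice

# on one PVE node: register the SBC as the external vote
pvecm qdevice setup 192.168.1.50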
I go one better than dual nodes and run Pi-hole on a k3s cluster in HA via 3 mini PCs running PVE, each hosting a k3s node and running Longhorn. I could have increased my redundancy by running RAID1, but it's already redundant, it was an extra cost, and it would use the only other available M.2 port, which I want to use for a TPU. Still, at a later date I might change to RAID1 on at least some of the mini PCs, as I guess I only need one node to have an NPU for Frigate.
 
Is there any official guide on how to back up Proxmox VE itself?

My main concern is the amount of work that I have put in to make the current setup work, e.g. GPU passthrough etc.
If that breaks due to updates etc, I would like to have a way to quickly revert back to the last known working state.

Is Proxmox Backup Server able to do it, and is it the only way?
Or is there some way that is built-in to Proxmox VE itself?
Lots of ways to skin a cat here... The method I currently use to "back up my PVE node config" is to keep a simple text note of all the specialized scripts or modifications I make to the PVE node after a fresh install to get passthrough and other hardware working (all my qm set commands for hard drives), the specific network config, and other specific info related to my computing environment. I basically write myself a guide like the passthrough guide, but tailored to the way my brain works and understands things, so I don't have to re-read the wiki again and again. In the event of a loss you have your notes on how to make your newly purchased hardware work again in the way you wish for your application; however, it's still a manual process to re-deploy to the new install of Proxmox. It will be much easier the second time around with your notes to follow.

Another method I have heard of is a backup of the root directory (all those /etc files and stuff) as a host backup or file backup to your backup server location. Then you would "mount" the file system and cp the config files back onto a fresh install. Though... I don't know if I would suggest this, because if the loss of a PVE node is due to a hardware issue, you would probably need to purchase new and different hardware to get the node back up and running. So IMO maybe a notebook is still better, because you don't want config files referencing old hard drive serial numbers, that type of thing. When you plug new hardware in, the computer assigns it a different address in some config (sda vs sdb, etc.), which makes the copy-over from your old system more frustrating, since you have to find all the spots in the config where you need to replace sda with sdb...

If you want "full auto" deployments like "infrastructure as code", you need to learn ansible and terraform to provision your machines "automagically". I am still personally learning alot about these things for my own homelab. I started with docker-compose for my containers so my configured services in my VMs and containers can be rebuilt very quickly if there are errors or failures. I run zfs everywhere. Learning all the zfs replication tasks. I currently set up with proxmox backup server (PBS) on a separate box so my nodes can restore any vm or container from my snapshots that are taken every 30 min. My plan in the future is to learn terraform with ansible to have a robust homelab that can be torn down and rebuilt in a very short amount of time. I am pretty far off from fully understanding how terraform (provision the new equipment) and ansible (configure OS/hypervisor softwares) do their things though...good luck in your journey!
 
In the event of a loss you have your notes on how to make your newly purchased hardware work again in the way you wish for your application; however, it's still a manual process to re-deploy to the new install of Proxmox. It will be much easier the second time around with your notes to follow.
Yes, and the great thing about it is that you can adjust to a newer PVE version or new hardware, whereas a deploy script made for PVE 8 maybe won't work with PVE 9 anymore unless you spend a lot of time keeping your redeploy scripts up to date. I've got a DokuWiki container for this where I write down everything I do. The great thing is that DokuWiki stores everything as easily readable plain text files. So even with no PVE node working, I could always look at the backups of those text files to see how to get the nodes working again.

Another method I have heard of is a backup of the root directory (all those /etc files and stuff) as a host backup or file backup to your backup server location. Then you would "mount" the file system and cp the config files back onto a fresh install.
Yes, but you can't overwrite the whole "/etc" folder, only specific files or sometimes even specific lines of a file. There are some GitHub scripts that back up and restore some of those files. But if you use such scripts and don't write your own, you may miss backing up important config files in case you are not running a stock PVE (like missing NUT configs in case you use a UPS, the postfix config for mail notifications, hook scripts, ...).
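As a rough illustration only: the file list below is an example and almost certainly incomplete for a customized node, and the target path is made up - adjust both to what you actually run.

Code:
# grab a dated tarball of commonly customized host configs
# (drop any paths that don't exist on your node, e.g. /etc/nut)
tar czf /mnt/backup/pve-config-$(hostname)-$(date +%F).tar.gz \
    /etc/pve /etc/network/interfaces /etc/hosts /etc/hostname \
    /etc/postfix /etc/nut /var/lib/vz/snippets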

If you want "full auto" deployments like "infrastructure as code", you need to learn ansible and terraform to provision your machines "automagically". I am still personally learning alot about these things for my own homelab. I started with docker-compose for my containers so my configured services in my VMs and containers can be rebuilt very quickly if there are errors or failures. I run zfs everywhere. Learning all the zfs replication tasks. I currently set up with proxmox backup server (PBS) on a separate box so my nodes can restore any vm or container from my snapshots that are taken every 30 min. My plan in the future is to learn terraform with ansible to have a robust homelab that can be torn down and rebuilt in a very short amount of time. I am pretty far off from fully understanding how terraform (provision the new equipment) and ansible (configure OS/hypervisor softwares) do their things though...good luck in your journey!
For bigger environments it's a great way, but I'm not sure how practical it is for a homelab. If downtime isn't a major concern and you don't have dozens of the same PVE nodes or VMs, writing those playbooks and keeping them always up to date might actually consume way more time than simply reinstalling a PVE node manually from scratch.
 
I would just like to chronicle my experience(s) with PVE host backup.

I regularly use DerDanilo's script with some modifications (removed --one-file-system (twice) from the script, changed the backup dir & max backups to 20). I find it useful as a complete record of all system changes made. I HAVE NEVER USED IT TO DIRECTLY RESTORE ANYTHING.

As far as complete block-level backup of the host goes, I tried Clonezilla with limited success (I believe it doesn't handle PVE partitions well). I HAVE NEVER SUCCESSFULLY RESTORED A PVE HOST WITH Clonezilla!

Instead I just boot from a live medium (I use my own customized SystemRescue; SSH enabled, the following commands stored in bash_history) and do the following:

Code:
mount /dev/xxx /mnt
#(mount storage device)

tmux
#(to enable leaving the process running)

dd if=/dev/YYY bs=32M status=progress conv=sync,noerror | gzip -c > /mnt/prxmx_host$(date +'%Y_%m_%d_%I_%M_%p').img.gz
#(YYY is PVE system os disk)

I HAVE USED THIS (ABOVE CODE, MODIFIED TO REVERSE) PROCESS FOR A COMPLETE SUCCESSFUL MIGRATION TO A NEW HOST!
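For completeness, the reverse (restore) direction would look roughly like this - the filename is a placeholder, and double-check the target device name, since this overwrites the whole disk:

Code:
gunzip -c /mnt/prxmx_host<DATE>.img.gz | dd of=/dev/YYY bs=32M status=progress
#(YYY is the new/target PVE system os disk)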

I make a complete block-level backup before any major update/system change.

I have a 512GB NVMe system disk (contains local & LVM, but it's hardly used as I store VMs, CTs etc. on a different storage disk); backup time is thirty-something minutes (yes - that's downtime needed!) and it produces a zipped file of approximately 7-8 GB.

I know I could try PBS for block-level plus deduplication etc., but I can't be bothered to build another host for this. (I run all backups/vzdump to a local disk & rsync them to another external USB disk from time to time.)

Anyway, that's my host-backup/restore experience plus rant, and yes, for such a robust, sophisticated & well-built PVE system (thanks to the excellent Proxmox team!):
I REALLY WOULD EXPECT PVE TO HAVE AN IN-HOUSE SOLUTION!

Thank you for sharing!

I have used this method to duplicate PVE with all VMs to more than 5 PCs successfully.

However, I would expect there is a limitation: the storage size in the new PC has to be no less than that of the original PC. The tricky part is that different brands and different batches have slightly different actual `usable storage` sizes for the same `labelled storage`. It may not work if the new storage is smaller, even if it's only 1kb less.
 
It may not work if the new storage is smaller, even if it's only 1kb less.
I have done it twice, with slightly smaller (50-100MB?) NVMes & had no problem. Usually just the end gets truncated, which shouldn't really contain anything, assuming you started with an empty NVMe for the original PVE install & hardly use the PVE OS disk for storage - as is my use case.
I find that using an NVMe to only about 30% of its capacity greatly improves its longevity (as in years). This I believe is most cost-effective.
 
I have done it twice, with slightly smaller (50-100MB?) NVMes & had no problem. Usually just the end gets truncated, which shouldn't really contain anything, assuming you started with an empty NVMe for the original PVE install & hardly use the PVE OS disk for storage - as is my use case.
I find that using an NVMe to only about 30% of its capacity greatly improves its longevity (as in years). This I believe is most cost-effective.

Really? Good to know, and I will test on a slightly smaller disk if I have one.

So the partition table has some kind of `self-healing` magic that automatically adapts to the smaller disk? Just out of curiosity...
I don't have much experience with the LVM things; in my previous experience with ext4, if the end sector is not found (because the disk is smaller), it won't boot at all.
 
I have done it twice, with slightly smaller (50-100MB?) NVMes & had no problem. Usually just the end gets truncated, which shouldn't really contain anything, assuming you started with an empty NVMe for the original PVE install & hardly use the PVE OS disk for storage - as is my use case.
I find that using an NVMe to only about 30% of its capacity greatly improves its longevity (as in years). This I believe is most cost-effective.

Why would you ever want to DD copy a Linux install?

I mean what's wrong with:
Code:
sgdisk --backup=/mnt/backup/GPTHEADERBACKUP /dev/sdX
sudo rsync -a -x / /mnt/backup

Then on LIVE boot, just:

Code:
sgdisk --load-backup=/mnt/backup/GPTHEADERBACKUP /dev/sdX
# here create filesystems and mount your partitions under /mnt/target
for i in /dev /proc /run /sys; do mount --bind "$i" "/mnt/target$i"; done
chroot /mnt/target
# if needed, update /etc/fstab with new UUIDs (if you do not use PARTLABELs), especially for the EFI partition
update-grub
# or grub-install /dev/sdX

WARNING: Typos above are possible. :)
 
Because of the length/complication of all you are doing in your post! With DD I've got an actual compressed backup of the whole drive.

Well, I only tried to put into context the single rsync / cp -a command.

By your own admission:
Code:
dd if=/dev/YYY bs=32M status=progress conv=sync,noerror | gzip -c > /mnt/prxmx_host$(date +'%Y_%m_%d_%I_%M_%p').img.gz
#(YYY is PVE system os disk)


I have a 512GB NVMe system disk (contains local & LVM, but it's hardly used as I store VMs, CTs etc. on a different storage disk); backup time is thirty-something minutes (yes - that's downtime needed!) and it produces a zipped file of approximately 7-8 GB.

This takes half an hour (on 512GB) and is ~8GB, while the install itself is <3GB uncompressed (subsequent rsyncs will by their nature be very fast). But most importantly, you may want to deploy such a backup onto a much smaller (or simply different-sized) drive.

My "complication" is simply in the fact that whatever new drive should be partitioned (the same way a normal install would expect). Under normal circumstances, I simply take sgdisk script from what I use anyhow and apply on any drive.
 
Hi, I have a PC with Veeam 12.2.0.334, which has acquired the ability to back up Proxmox VMs.
I have been successfully using Veeam to back up and restore VMs in Proxmox.

I installed the agent for backup of Linux machines following this guide: https://forums.veeam.com/veeam-agen...ox-incremental-backups-with-veeam-t66702.html

My Proxmox host has 3 SSDs.
2 SSDs are SATA in ZFS RAID1; the third is NVMe and contains all the VMs and CTs.

I need the easiest way to back up and restore the Proxmox host with Veeam.

Veeam allows you to back up the entire PC, entire disks, or individual files.
The first option doesn't seem right to me, I don't know if the second one would be good given the ZFS disks, and I don't know which folders to back up for the third one.

Then there's always the problem of restoring.
The most convenient thing would be to start the PC with a pen drive that connects to the network, takes the data from the Veeam server, and restores it onto the PC.
 
Hi, I have a PC with Veeam 12.2.0.334, which has acquired the ability to back up Proxmox VMs.
I have been successfully using Veeam to back up and restore VMs in Proxmox.

I installed the agent for backup of Linux machines following this guide: https://forums.veeam.com/veeam-agen...ox-incremental-backups-with-veeam-t66702.html

My Proxmox host has 3 SSDs.
2 SSDs are SATA in ZFS RAID1; the third is NVMe and contains all the VMs and CTs.

I need the easiest way to back up and restore the Proxmox host with Veeam.

Veeam allows you to back up the entire PC, entire disks, or individual files.
The first option doesn't seem right to me, I don't know if the second one would be good given the ZFS disks, and I don't know which folders to back up for the third one.

Then there's always the problem of restoring.
The most convenient thing would be to start the PC with a pen drive that connects to the network, takes the data from the Veeam server, and restores it onto the PC.

Are you aware of:

https://forum.proxmox.com/threads/veeam-silent-data-corruption.155212/
 
