Proxmox Backup Server 3.4 released!

t.lamprecht

Proxmox Staff Member
Thanks to our amazing community, we’re rolling out our newest software update for Proxmox Backup Server 3.4. Your feedback has been instrumental in shaping these improvements.

This version is based on Debian 12.10 ("Bookworm"), but uses the newer Linux kernel 6.8.12-9 as the stable default and kernel 6.14 as an opt-in, and comes with ZFS 2.2.7 including compatibility patches for kernel 6.14.

Here are the highlights:
  • Optimized performance for Garbage Collection
  • Granular backup snapshot selection for sync jobs
  • Easier backups of arbitrary Linux hosts with the new static build of the CLI client
  • Increased throughput for tape backup
  • Countless improvements for better performance and usability
You can find all details in the full release notes and we are looking forward to your feedback!

Release notes
https://pbs.proxmox.com/wiki/index.php/Roadmap#Proxmox_Backup_Server_3.4

Press release
https://www.proxmox.com/en/about/company-details/press-releases/proxmox-backup-server-3-4

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
https://enterprise.proxmox.com/iso

Documentation
https://pbs.proxmox.com/docs

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

Thanks for your contributions, ideas, and support — with your insights, we’ve introduced meaningful updates to enhance your experience.

FAQ
Q: Can I upgrade the latest Proxmox Backup Server 2.x to 3.4 with apt?
A: Yes, please follow the upgrade instructions on https://pbs.proxmox.com/wiki/index.php/Upgrade_from_2_to_3

Q: How does this integrate into Proxmox Virtual Environment?
A: Just add a Proxmox Backup Server datastore as a new storage target in your Proxmox VE. Make sure that you run the latest Proxmox VE 8.4.
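For those who prefer the CLI over the GUI (Datacenter → Storage → Add → Proxmox Backup Server), a rough sketch with pvesm; the storage ID, host, datastore, user, password, and fingerprint below are placeholders:

Code:
pvesm add pbs pbs-store1 --server pbs.example.com --datastore datastore1 \
    --username backup@pbs --password 'secret' \
    --fingerprint '<sha256-fingerprint>'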

Q: Is Proxmox Backup Server still compatible with older clients or Proxmox VE releases?
A: We are actively testing compatibility between all currently supported major versions, including the previous version. Full compatibility with even older (major) client versions is supported only on a best effort basis.

Q: How do I install the proxmox-backup-client on my Debian or Ubuntu server?
A: We provide a "Proxmox Backup Client-only Repository", see https://pbs.proxmox.com/docs/installation.html#client-installation
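As a rough sketch of what the steps described there boil down to on Debian 12 "Bookworm" (repository URL and key path as in the linked documentation; please verify there before use):

Code:
# add the Proxmox release key and the client-only repository, then install the client
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
    -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
echo "deb http://download.proxmox.com/debian/pbs-client bookworm main" \
    > /etc/apt/sources.list.d/pbs-client.list
apt update && apt install proxmox-backup-client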

Q: What will happen with the existing backup tool (vzdump) in Proxmox Virtual Environment?
A: You can still use vzdump. The new backup solution is an additional, but very powerful, way to back up and restore your VMs and containers.

Q: Is there any recommended server hardware for the Proxmox Backup Server?
A: We recommend enterprise-grade server hardware components, with fast local SSD/NVMe storage. Access and response times from rotating drives will slow down all backup server operations. See https://pbs.proxmox.com/docs/installation.html#recommended-server-system-requirements

Q: Can I install Proxmox Backup Server on Debian, in a VM, as an LXC container, or alongside Proxmox VE?
A: Yes, but none of these are the recommended setup (expert use only).

Q: Where can I get more information about upcoming features?
A: Follow the announcement forum and the pbs-devel mailing list https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel, subscribe to our newsletter https://www.proxmox.com/news, and see https://pbs.proxmox.com/wiki/index.php/Roadmap.
 
Hi, congrats for this nice update :)
Has there been any progress so far on the "Proxmox VE host backup" feature request that has been listed in the Roadmap for quite some time now?
 
No, there is no ETA for PVE host backups.
 
I think file-level restore is currently broken
See this thread.

"Proxmox VE host backup" feature request
FWIW we use a version of this script from Reddit. As noted there, comment out the "tar" lines, add "--include-dev /etc/pve", and note that the fingerprint is necessary only if one isn't using a valid cert + hostname. It at least gets all the info. If stored in /etc/pve/ it can be run as "bash /etc/pve/script.sh".
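This is not an official host backup feature, but the core of such a script is essentially a single client invocation. A minimal sketch with a hypothetical repository, user, and datastore (/etc/pve is a separate FUSE mount, hence the --include-dev):

Code:
# hypothetical repository/user/datastore; adjust to your setup
export PBS_REPOSITORY='backup@pbs@pbs.example.com:datastore1'
export PBS_PASSWORD='secret'
proxmox-backup-client backup pve-etc.pxar:/etc --include-dev /etc/pve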
 
Hi,

I upgraded our PBS from 3.3 to 3.4 and added the following tuning to our datastores' GC:

Code:
tuning chunk-order=none,gc-atime-cutoff=30,gc-atime-safety-check=1,gc-cache-capacity=8388608
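(For anyone wanting to try the same: as far as I know, this tuning string can be applied to a datastore, for example one named datastore1, via proxmox-backup-manager, or in the datastore's options in the GUI:)

Code:
proxmox-backup-manager datastore update datastore1 \
    --tuning 'chunk-order=none,gc-atime-cutoff=30,gc-atime-safety-check=1,gc-cache-capacity=8388608'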

I'm happy to report that our GC time went from 1h30-2h to 27 min on our SSD PBS and from 45-50 min to 15 min on our NVMe PBS, so consistently about three times faster!

Congratulations to the PBS dev team :)

Note: I also added to our tape jobs:

Code:
worker-threads 8

It seems to be a bit faster on both our LTO9 and our LTO7 tape libraries, but tape run time is noisy and it's a less spectacular gain than for the GC :)

A few questions on the gc-cache-capacity parameter, documented as "Garbage collection chunk digest cache capacity" and with a maximum settable value of 8388608.

The increase in RAM usage during GC this morning was less than 2 GB according to the "Memory Usage" graph of our PBS, for a total of 4 GB of memory used. We have 256 GB of RAM and an AMD EPYC processor on our PBS server.

Is there a reason to limit this parameter to 8 million? We have 251 million chunks on our datastore; it looks like all their chunk IDs would fit in RAM?

Also, during the GC there is a lot of disk write activity; I assume it comes from setting atime on the 251 million chunk inodes. Could this be replaced with all-in-RAM bookkeeping in the PBS server process? 251 million times 32 bytes per chunk ID is only about 8 GB of RAM.

Statistics from one of our datastores:

Code:
2025-04-11T07:27:52+02:00: On-Disk usage: 55.054 TiB (2.77%)
2025-04-11T07:27:52+02:00: On-Disk chunks: 25150968
2025-04-11T07:27:52+02:00: Deduplication factor: 36.05
2025-04-11T07:27:52+02:00: Average chunk size: 2.295 MiB

Thanks again!

Note: we have basic support for our PBS servers; let me know if you want us to open a Bugzilla entry or a support request.
 

From that page:

This package conflicts with the proxmox-backup-client package, as both provide the client as an executable in the /usr/bin/proxmox-backup-client path.

You can copy this executable to other, e.g. non-Debian based Linux systems.

Wouldn't it make sense to also provide a download as a tar.gz or tar.zst file, or even just the binary? I mean, if somebody wants to use the backup client to back up a non-Debian-based system, it would be counter-intuitive to first set up a Debian system.
 
Great to hear! The gains depend of course on the datastore layout and contents, but we saw similar levels of improvement during testing as well.

The issue is that further increasing the maximum cache capacity will not necessarily translate into a reduced runtime for garbage collection, see [0]. The obtained improvement is also due to a change in how the garbage collection iterates over the index files during phase 1. This now takes the logical structure of the datastore better into account, as there is a stronger correlation between chunks within the same backup group (reuse of the previous snapshot, identical data chunks).

You could try to identify, via strace -f -e utimensat -p $(pidof proxmox-backup-proxy), which chunks are touched during phase 1 and see whether there are a lot of duplicates in your case. If you still see multiple updates of the same chunk files with the current maximum cache capacity limit, then yes, I encourage you to open an enhancement request in our Bugzilla [1].

[0] https://lore.proxmox.com/pbs-devel/fa3800dd-e812-4c9a-9d3d-2d8673e05355@proxmox.com/
[1] https://bugzilla.proxmox.com/
 
The distribution of the static client will still be improved upon; we also plan to include a mechanism to keep the backup client up to date, see https://bugzilla.proxmox.com/show_bug.cgi?id=4788#c11
 

Regarding the disk write activity: for this first GC I have a proxy measure, as I was running iostat -xmt 10 during the GC. Writes varied from 10 MB/s to 70 MB/s, with a ballpark average of 20 MB/s. That means 27 minutes * 20 MB/s equals around 32 GB of writes, and so roughly 1300 bytes written per chunk on average through our ext4 filesystem (4k block size, but I don't know how inodes are packed or what the flush policy is).

There are 293 groups (VMs only), 9697 snapshots, and a dedup factor of 36.05. Our prune policy is 7 last, 21 daily, 3 weekly, 3 monthly, so dedup is roughly in line with the depth in number of snapshots per VM.

Reading [0], I think this is consistent with the few gigabytes of RAM I observed at the system level: on your 3 TB test repo with a dedup factor of 43 you're in the < 1 GB RAM range, and our repo is roughly 20 times larger with a similar (36 vs 43) dedup factor.

I'll try to run strace tomorrow morning and report back.
 
Amazing! Congratulations!

I will go ahead and test whether there are performance improvements in the use case where there is a large number of backups in a single namespace. In version 3.3, the listing of these backups takes way, way, way too long, and they also do not show up in PVE (in the backup section) as a result.