Proxmox Backup Server 4.2 released!

t.lamprecht

Proxmox Staff Member
We're pleased to announce the release of Proxmox Backup Server 4.2.

This version is based on Debian 13.4 ("Trixie"), uses Linux kernel 7.0 as the new stable default for improved hardware support, and comes with ZFS 2.4 for reliable, enterprise-grade storage.

Here are the highlights:
  • Move groups and namespaces within a datastore for easier backup reorganization
  • Server-side encryption/decryption for sync jobs
  • Improved sync performance with concurrent group processing ("worker-threads")
  • S3-compatible object stores supported as backup storage backend, with request statistics and notifications
  • Numerous performance, usability, and backend improvements across the stack
  • and much more...
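As a purely illustrative sketch of the new sync concurrency option: the job ID `my-sync-job` is a placeholder, and "worker-threads" is simply the option name quoted in the highlight above, so verify the exact CLI spelling on your installation.

```shell
# Hedged sketch: raise the number of concurrently processed groups
# for an existing sync job. Check the exact property name with
# `proxmox-backup-manager sync-job update --help` first.
proxmox-backup-manager sync-job update my-sync-job --worker-threads 4
```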
You can find all details in the full release notes, and as always, we're really looking forward to your feedback and experiences with Proxmox Backup Server 4.2!

Release notes
https://pbs.proxmox.com/wiki/Roadmap#Proxmox_Backup_Server_4.2

Press release
https://www.proxmox.com/en/about/company-details/press-releases/proxmox-backup-server-4-2

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
https://enterprise.proxmox.com/iso

Documentation
https://pbs.proxmox.com/docs

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

Thanks for your contributions, ideas, and support — with your insights, we've introduced meaningful updates to enhance your experience.

FAQ
Q: Can I upgrade the latest Proxmox Backup Server 3.x to 4.2 with apt?
A: Yes, please follow the upgrade instructions on https://pbs.proxmox.com/wiki/Upgrade_from_3_to_4
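For orientation, the wiki steps amount to something like the sketch below. The checklist-script name (`pbs3to4`) and the codename swap are assumptions modeled on previous major upgrades; treat the linked wiki page as authoritative.

```shell
# 1. Bring the current 3.x installation fully up to date.
apt update && apt dist-upgrade

# 2. Run the upgrade checklist script (name assumed) and resolve any
#    warnings it reports before continuing.
pbs3to4 --full

# 3. Point the APT repositories from Debian 12 "Bookworm" to 13 "Trixie".
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list

# 4. Perform the actual major upgrade, then reboot.
apt update && apt dist-upgrade
systemctl reboot
```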

Q: How does this integrate into Proxmox Virtual Environment?
A: Just add a Proxmox Backup Server datastore as a new storage target in your Proxmox VE. Make sure that you run the latest Proxmox VE 9.x.

Q: Is Proxmox Backup Server still compatible with older clients or Proxmox VE releases?
A: We are actively testing the compatibility of all the major versions currently supported, including the previous one. This means that you can safely back up from Proxmox VE 8 to Proxmox Backup Server 4, or from Proxmox VE 9 to Proxmox Backup Server 3. However, full compatibility with major client versions that are two or more releases apart, like for example Proxmox VE 7 based on Debian 11 Bullseye and Proxmox Backup Server 4 based on Debian 13 Trixie, is supported on a best-effort basis only.

Q: How long will Proxmox Backup Server 3.4 receive bug fixes and security support?
A: Proxmox Backup Server 3.4 will receive security updates and critical bug fixes until August 2026. This support window provides an overlap of approximately one year after the release of Proxmox Backup Server 4, giving users ample time to plan their upgrade to the new major version.
For more information on the support lifecycle of Proxmox Backup Server releases, please visit:
https://pbs.proxmox.com/docs/faq.html#how-long-will-my-proxmox-backup-server-version-be-supported

Q: How do I install the proxmox-backup-client on my Debian or Ubuntu server?
A: We provide a "Proxmox Backup Client-only Repository". See https://pbs.proxmox.com/docs/installation.html#client-installation
For Debian derivatives we recommend installing the proxmox-backup-client-static package to avoid issues with different system library versions.
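A hedged sketch of what that looks like on a Debian 13 "Trixie" host follows; the repository line and key filename mirror the pattern from earlier releases and should be verified against the linked installation guide.

```shell
# Add the client-only repository (codename "trixie" assumed for Debian 13).
echo "deb http://download.proxmox.com/debian/pbs-client trixie main" \
  > /etc/apt/sources.list.d/pbs-client.list

# Fetch the Proxmox release key (filename assumed; check the install docs).
wget https://enterprise.proxmox.com/debian/proxmox-release-trixie.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-release-trixie.gpg

# Install the static client to sidestep system library version mismatches.
apt update && apt install proxmox-backup-client-static
```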

Q: What will happen with the existing backup tool (vzdump) in Proxmox Virtual Environment?
A: You can still use vzdump. The new backup is an additional, but very powerful way to back up and restore your VMs and containers.

Q: Is there any recommended server hardware for the Proxmox Backup Server?
A: We recommend enterprise-grade server hardware components, with fast local SSD/NVMe storage. Access and response times from rotating drives will slow down all backup server operations. See https://pbs.proxmox.com/docs/installation.html#recommended-server-system-requirements

Q: Can I install Proxmox Backup Server on Debian, in a VM, as LXC or alongside with Proxmox VE?
A: Yes, but none of these are the recommended setup (expert use only).

Q: Where can I get more information about upcoming features?
A: Follow the announcement forum and pbs-devel mailing list https://lists.proxmox.com/postorius/lists/pbs-devel.lists.proxmox.com/ and subscribe to our newsletter https://www.proxmox.com/news and see https://pbs.proxmox.com/wiki/Roadmap.
 
Is it possible to have the proxmox-backup-client-static in the latest version? Great job
 
Is it possible to have the proxmox-backup-client-static in the latest version?

Do you mean the dedicated pbs-client repo? It was indeed lagging behind, but the CDN sync there should be finished now.
 
Do you mean the dedicated pbs-client repo? It was indeed lagging behind, but the CDN sync there should be finished now.
exactly.

Code:
proxmox-backup-client-static_4.2.0-1_amd64.deb     28-Apr-2026 22:16             9296220

Everything is fine. Thanks for the update.
 
Great work moving S3 across the finish line. Now the next (really important) step there is immutability. So far I'm holding out for that to get done rather than start using Veeam for our Proxmox systems, but that's literally the level of tradeoff we're having to consider.

Another piece (for both PBS and PDM) is fixing authentication and authorization to the point where it's at the same level as PVE. Right now, forcing users to log in, then wait for an admin to assign them to a group, then log back in is a management chore and a half - tolerable for PBS, but it makes PDM a nonstarter. My (possibly wrong) understanding is that PBS and PDM share much of their auth code, so fixing one fixes both.
 
Great work moving S3 across the finish line. Now the next (really important) step there is immutability. So far I'm holding out for that to get done rather than start using Veeam for our Proxmox systems, but that's literally the level of tradeoff we're having to consider.

Are you aware that you can already implement a form of immutability with an offsite PBS and pull-syncs? For this to work, you don't even need to open any ports on the offsite PBS firewall, as long as it can reach port 8007 on your local PBS: https://pbs.proxmox.com/docs/storage.html#ransomware-protection-recovery
Basically, you use a combination of said firewall rules and read-only API tokens.
And if somehow a bad actor manages to get root/admin access to your Veeam server, they can also modify your "immutable" repository; in that regard there is no difference between Veeam and PBS.
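To make the pull-sync setup concrete, here is a hedged sketch run on the offsite PBS. All names (hosts, datastores, `sync-user@pbs!pull-token`) are placeholders, and the exact option set may differ between versions, so check `proxmox-backup-manager help` and the linked docs.

```shell
# On the LOCAL PBS: create an API token for a user that only has
# read-level (audit / backup-read) permissions on the datastore,
# so the offsite side can pull but never modify or prune anything.

# On the OFFSITE PBS: register the local PBS as a remote...
proxmox-backup-manager remote create local-pbs \
  --host 192.0.2.10 \
  --auth-id 'sync-user@pbs!pull-token' \
  --password 'TOKEN_SECRET' \
  --fingerprint 'AA:BB:CC:...'

# ...then create a pull sync job into a local datastore. With no open
# incoming ports on this host, a compromised local site cannot reach it.
proxmox-backup-manager sync-job create pull-from-main \
  --remote local-pbs \
  --remote-store pve-backups \
  --store offsite-store \
  --schedule daily
```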
 
And if somehow a bad actor manages to get root/admin access to your Veeam server, they can also modify your "immutable" repository; in that regard there is no difference between Veeam and PBS.
I was specifically talking about S3 immutability, which you did not address.
To your point, if an attacker is in backend systems long enough to modify the retention policy in Veeam, wait for the retention to expire, and then delete stuff... yeah. At that point no amount of resilient design is going to overcome such an epic failure in monitoring, configuration management, etc. The same attacker would just as easily pwn the offsite PBS, so either way the data is gone.
 
  • Server-side encryption/decryption for sync jobs

I have a question regarding the limitation to push-sync jobs:

  • Server-side encryption for push sync jobs. Push sync jobs can now be configured to encrypt snapshots before sending them to remote datastores. For example, this can be used to push snapshots to Proxmox Backup Server instances in less trusted offsite locations. Furthermore, pull sync jobs can be configured to decrypt snapshots encrypted on the remote datastores.


This would mean that if I want to encrypt my backups, I would have to use a push-sync job. This is something I would really like to avoid, since pull-sync jobs allow me to close all ports on my remote PBS except a VPN connection to the management client. Are there any plans to implement something like encryption even for a pull-sync job? Of course, I could always work around it by using localhost as the remote: first push-sync with encryption to a local "encrypt-first" namespace, then run the pull-sync from there. But this needs additional storage space and adds complexity.
 
if I want to encrypt my backups I would have to use a push-sync job

If the backups are already encrypted though, AFAIK they just remain encrypted. The pull-sync/remote PBS knows they exist but doesn't have the key. If I understand (?), the new feature would be to encrypt an unencrypted backup before pushing.
The same attacker would just as easily pwn the offsite PBS
But they would need to access the remote PBS directly to compromise it, which is generally not possible unless using push sync... unless maybe you're discussing the case where a network is compromised and no one notices until past the remote PBS's retention period? We have a longer retention on our offsite PBS, which uses slower/cheaper storage.
 
If the backups are already encrypted though, AFAIK they just remain encrypted. The pull-sync/remote PBS knows they exist but doesn't have the key. If I understand (?), the new feature would be to encrypt an unencrypted backup before pushing.

Well, I can imagine the following use case: you have a local PBS on trusted hardware for fast restores, where you prefer not to use encryption, so that even if you somehow lost the key you would still have backups, and also to reduce CPU load for compression on the PVE nodes. If, however, I want to pull these backups from a remote PBS, I would like to be able to encrypt them before the pull, since I might not be able to trust that hardware completely (due to it being a virtual cloud server or a PBS-as-a-service instance).
To your point, if an attacker is in backend systems long enough to modify the retention policy in Veeam, wait for the retention to expire, and then delete stuff... yeah. At that point no amount of resilient design is going to overcome such an epic failure in monitoring, configuration management, etc. The same attacker would just as easily pwn the offsite PBS, so either way the data is gone.

Not necessarily. If the remote PBS has no open incoming ports but is still able to open a connection to your local PBS, it's basically impossible for any bad actor to access it. Because: no open port, no connection.
I agree with you on immutable s3 backups though.
 
the limitation is intentional - if you don't trust your remote PBS with unencrypted backups, you also cannot trust it with the key to encrypt, since that gives it access to the unencrypted backups.
 
the limitation is intentional - if you don't trust your remote PBS with unencrypted backups, you also cannot trust it with the key to encrypt, since that gives it access to the unencrypted backups.
Makes sense, thanks for the explanation. Now I'm feeling stupid ;) I guess the following scheme would be problematic for various reasons, or did I miss something:
- Remote PBS initiates a pull-sync via API
- Local PBS will encrypt the pulled chunks if (and only if) a config flag "encrypt before pull" is set on the local datastore or in the API call

So the idea is that the encryption key never leaves your local PBS. Is this feasible, or problematic for other reasons, e.g. that it might break other things?
 
Makes sense, thanks for the explanation. Now I'm feeling stupid ;) I guess the following scheme would be problematic for various reasons, or did I miss something:
- Remote PBS initiates a pull-sync via API
- Local PBS will encrypt the pulled chunks if (and only if) a config flag "encrypt before pull" is set on the local datastore or in the API call

So the idea is that the encryption key never leaves your local PBS. Is this feasible, or problematic for other reasons, e.g. that it might break other things?
That could be implemented, but it is not a nice approach from a code design/architecture point of view. The sync happens as a regular client talking to the other PBS; this would require a special "encrypting backup reader" feature on the other end and would split the sync config across both ends.
 
So the idea is that the encryption key never leaves your local PBS. Is this feasible, or problematic for other reasons, e.g. that it might break other things?
The usefulness of an encrypting pull has also been touched on in the discussion in the bugzilla entry https://bugzilla.proxmox.com/show_bug.cgi?id=7251

Technically it is definitely possible, but it opens a few more complications with key handling. The less trusted pulling remote must not be involved in the key handling and key selection mechanism. Further, the guarantees to keep the key around for decryption will shift, as they could be coupled to a remote acting as a client.

Furthermore, such a mechanism would need to re-encrypt the index files before or while pulling, as otherwise the remote would not even know the digests of the encrypted chunks to be pulled; it cannot use the plain chunk digests.
 
Thanks for the further explanations. I'm fine with the current state of things; after all, a workaround via "localhost" as a "remote" relay is possible.
 
So, I noticed the new kernel 7 came up on my (non-enterprise) PBS, so I tried installing it. I have a PBS with an enterprise license, but this was an extra backup server I had on an old HP MicroServer Gen8.

The installation went OK, and it rebooted, but then it crashed within the day - it looks like it crashed in the middle of a backup job. I was at home when it happened, so I logged in remotely to the iLO and managed to get it to reboot, but it was clearly having trouble with the 7.0 kernel. I pinned it back to the previous 6.17.13(??) kernel and it seems to be fine again. Just thought I'd provide that feedback.