Compression drastically affects Proxmox Backup Server performance

mauro70

New Member
Aug 7, 2023
Dear All

I would like to share my thoughts about Proxmox Backup Server.

First of all, let me say that I'm testing PVE as a potential replacement for my VMware cluster, and I'm really impressed by the product and its capabilities. Should it meet all my requirements, it will be one of the best paid subscriptions I have ever had.

However, in my view, the real sore point of this platform is the performance of the backup server. My test-bed is very simple: a two-node cluster with a NAS in the middle, shared by both nodes via NFS. The two nodes have 2 x 512GB NVMe drives each (note: not mSATA SSDs, NVMe). The three devices are connected through an 8 x 10Gb port switch (in order to also have network link redundancy).
Proxmox Backup Server is installed directly on a PVE node. I can't say whether that is the best option (the documentation does not recommend it), but considering the results I got, I think it is probably irrelevant. In any case, this way PBS can use the full power of the node without any hypervisor in the middle (I hope).

The first datastore I set up in PBS was an NFS-shared folder. I put the backup procedures in place and discovered that backups ran at an average speed of 200MB/sec. Probably not too bad, but of course I was expecting a bit more, especially over the 10Gb connection.

So I read the several topics posted here reporting performance "issues" with PBS and tried to follow the suggestions identified in each of them (for example, changing the max-workers setting, increasing the network bandwidth, which was not my case, and so on). It made no difference; the average speed stayed at around 200MB/sec.
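For reference, the kind of worker-count tweak I tried looks roughly like this in /etc/vzdump.conf (the value is just an example, not a recommendation):

Code:
# /etc/vzdump.conf
performance: max-workers=16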

Then, in a few topics, I read statements like "PBS has been designed with SSDs in mind as backup storage, because SSDs nowadays are cheaper than before". So I thought: "Actually, my NAS has standard spinning disks; that could explain the poor performance I'm seeing."

Considering that I want to make Proxmox the virtualization platform offered to my clients, I didn't give up and made an interesting comparison: I added the PVE "local" storage (in other words, the folder /var/lib/vz) as a datastore in PBS.
Then I ran 3 tests, backing up a 32GB VM (powered off): one using PBS with the above datastore, one using the "local" storage with ZSTD compression, and one using the "local" storage with no compression. Here are the results:

(-) backup via PBS: average speed 220MB/sec (see backup-pbs.log)
(-) backup via local storage, ZSTD compression: average speed 260MB/sec (see backup-local-zst-compression.log)
(-) backup via local storage, no compression: average speed 990MB/sec (see backup-local-no-compression.log)
And, let me say again: all backups are stored in the folder /var/lib/vz, which is on the NVMe.
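For clarity, the two local-storage runs correspond to CLI invocations along these lines (VM ID 100 is just a placeholder; "local" is the standard PVE storage):

Code:
# local storage, ZSTD compression
vzdump 100 --storage local --compress zstd
# local storage, no compression
vzdump 100 --storage local --compress 0

The PBS run is the same kind of job, just pointed at the PBS storage entry instead of "local"; there the compression is handled by the PBS client.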

This leads me to the following conclusion: compression definitely has a severe impact on backup performance (regardless of whether it is mandatorily applied by PBS or set manually). With that in mind, I'm tempted to think that the speed of the backup to the NFS share could also be better than 200MB/sec if compression could be disabled on the PBS side. However, as far as I have read, compression in PBS cannot be disabled. And this is the key point of this post.

Please note that I'm not saying that PBS is a bad tool: I fully understand that it offers several capabilities not available in a simple "local" backup (e.g. incremental backups). I'm simply trying to say that, in my view, there should be a way to disable compression in PBS. We all agree that SSDs nowadays are cheaper than before: a user might prefer to use a bit more space in exchange for faster backups, so why not offer that possibility? Considering that PBS is the only backup tool available in the platform, such an enhancement should be seriously considered.

Kind Regards
Mauro
 

Attachments

  • backup-local-no-compression.log (1.8 KB)
  • backup-pbs.log (4.8 KB)
  • backup-local-zst-compression.log (4.3 KB)
I doubt that disabling compression will help in your use case.

Doing benchmarks is quite complex and vzdump and PBS are really different tools. PBS has a lot of unique features.
 
Well, let's try to be pragmatic: in all the topics I have read in this forum about poor backup performance, I haven't seen any real root cause or any effective solution. In some threads I have seen others speculating about the possible impact of compression, but again, never "verified", nor formally confirmed.
I know that benchmarking is quite a complex matter; however, what I did is probably the best estimate of the root cause.
Both the first and the second tests have ZSTD compression enabled, and the difference is about 15% (40MB/sec).
It is quite straightforward to think that this 15% difference is the overhead introduced by PBS for its unique internal features. In the same way, it is quite straightforward to think that most of the remaining overhead is due to compression. As a consequence, if compression could be disabled on the PBS side, it is credible to think that a backup would perform like the third test minus that 15% (about 850MB/sec).
Otherwise, it still would not explain the (sorry to say) disappointing performance with NVMe drives...

Again, I'm not saying that PBS is a bad tool, but that it is probably worth looking seriously into this: a quick way to **try** to address the matter without too much effort would be to implement a configuration option that allows disabling compression. It probably would not be the final resolution, but I have a strong feeling it would be the closest to resolving the root cause.
Kind Regards
Mauro
 
strong feeling
We do not believe in feelings. We benchmarked and tested PBS in almost all configurations, with all compression methods and without.

=> disabling compression will not help, just follow the PBS deployment guides (local SSD/NVMe storage) and you will get good performance - do not mount network storage as a datastore.
 
>> just follow the PBS deployment guides (local SSD/NVMe storage)
which is exactly what I did, and the results were crystal clear (no feeling involved in that case). My tests and results are available here; anybody else can build the same test-bed I did and is free to confirm or contradict them.
That said, my impression is that nobody wants to admit (or deeply verify) that compression affects backup performance --> really, no harm no foul, I have just shared my experience, as several others did when reporting the same problem.

>> do not mount network storage as datastore.
Surely it is not the optimal choice, I definitely agree. But the results I got would not justify purchasing dedicated SSD/NVMe-based storage for PBS, given that the performance is in line with low-end storage using standard spinning disks (even mounted via NFS).

Cheers
M

PS: by the way, the documentation I followed is https://pbs.proxmox.com/docs/proxmox-backup.pdf, and it does not say anything special about SSD/NVMe datastores. Happy to follow other documents specifically covering SSD/NVMe setups if they exist.
 
is maybe your cpu the bottleneck? can you do a
Code:
proxmox-backup-client benchmark

best with the '--repository' parameter so the actual end-to-end performance of the pbs is measured too?

note that for pbs backups, most operations are done on the client side, so a weak cpu on the pve node will hurt pbs performance
 
which is exactly what I did
In order to get comparable results, please use the recommended setup:

- dedicated PBS Server (not a VM)
- local SSDs as datastore (enterprise SSDs)
- make sure that you do not run incremental backups like you did (from your log: INFO: backup was done incrementally, reused 13.07 GiB (40%))

As I said, benchmarking is a lot of work and complex. And we did all this multiple times (and we can say that disabling compression will not solve your issue)

Almost every post here about performance issues is about bad performance. Why? Because only users with issues post here ...
And: most are home users, mounting datastores from cheap(est) NAS devices or even from the cheapest cloud storage providers, and wondering why they do not get fast backups.

And yes, I agree that PBS does not work great on slow HDDs and slow CPUs, so the idea of using the oldest boxes for PBS is a really bad one.

=> use only modern CPU and fast SSD on a dedicated server if you need fast performance.
 
>> - dedicated PBS Server (not a VM)
it was installed as part of PVE, so running on the bare-metal PVE host. A dedicated host can be envisaged, but only once I'm sure it is worth doing, you know what I mean.

>>- local SSDs as datastore (enterprise SSDs)
the datastore was the PVE folder /var/lib/vz (so on the main PVE filesystem, on the NVMe)

>> - make sure that you do not run incremental backups like you did (from your log: INFO: backup was done incrementally, reused 13.07 GiB (40%))
That's strange, because before doing any test I deleted the previous backup... I can't say why it said "incremental", as I made sure that the datastore was emptied before doing the new test.

>> from cheap(est) NAS devices [...] why they do not get fast backups.
Tom, I fully understand your point and fully agree; I would think the same if I were in your shoes. Maybe it was not clear so far, but I have not insisted further on the backup to the NAS: I got around 200MB/sec and I'll stay with that. However, I'm keeping the point because the three comparison tests above were **all** performed with the datastore on the NVMe.

>> use only modern CPU and fast SSD on a dedicated server if you need fast performance.
OK, point taken. As I said, if it proves worthwhile I'll do that, but it depends on a clear understanding of what I experienced now: I carried out three tests on the same server and the same datastore, and the results are definitely different; from the command line, the most evident difference between the three tests is the compression option (enabled in the first two, disabled in the third).

To reply to dcsapak, here are the results:

proxmox-backup-client benchmark --repository root@pam@192.168.163.66:var-lib-vz
Uploaded 642 chunks in 5 seconds.
Time per request: 7815 microseconds.
TLS speed: 536.70 MB/s
SHA256 speed: 343.03 MB/s
Compression speed: 321.54 MB/s
Decompress speed: 455.20 MB/s
AES256/GCM speed: 924.55 MB/s
Verify speed: 191.74 MB/s

The recommended requirements are "CPU: Modern AMD or Intel 64-bit based CPU, with at least 4 cores". The CPU in my node is Intel(R) Xeon(R) D-2146NT CPU @ 2.30GHz. Surely a bit old.
Probably, on more modern and powerful CPUs the overhead introduced by compression is negligible. On less modern CPUs like mine, instead, the overhead has a major impact. But this is another incentive to make the compression option configurable (as many other backup tools do), because it is not very fair to tell a client "purchase a powerful server if you want good performance with PBS", especially when the three tests above show that even with an old CPU it is possible to reach about 990MB/sec.

Cheers
M
 
That's strange, because before doing any test I deleted the previous backup... I can't say why it said "incremental", as I made sure that the datastore was emptied before doing the new test.
Did you wait at least 24 hours and 5 minutes after the prune and then run a GC task? If not, nothing got deleted; the chunks are still there and will be reused.
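For reference, the manual sequence would be roughly the following (the backup group is just an example; repository and datastore names are taken from your benchmark command):

Code:
# remove old snapshots of a group according to the retention options...
proxmox-backup-client prune vm/100 --keep-last 1 --repository root@pam@192.168.163.66:var-lib-vz
# ...then let garbage collection actually free the unreferenced chunks
proxmox-backup-manager garbage-collection start var-lib-vz

GC only cleans up chunks that are older than that 24 hour + 5 minute grace period, which is why simply removing the backup is not enough.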
 
No, I didn't... I didn't guess that! OK, it might sound stupid, but I pressed "remove" and in my mind everything was removed...
 
In order to get comparable results, please use the recommended setup:

- dedicated PBS Server (not a VM)
- local SSDs as datastore (enterprise SSDs)
- make sure that you do not run incremental backups like you did (from your log: INFO: backup was done incrementally, reused 13.07 GiB (40%))

As I said, benchmarking is a lot of work and complex. And we did all this multiple times (and we can say that disabling compression will not solve your issue)

Almost every post here about performance issues is about bad performance. Why? Because only users with issues post here ...
And: most are home users, mounting datastores from cheap(est) NAS devices or even from the cheapest cloud storage providers, and wondering why they do not get fast backups.

And yes, I agree that PBS does not work great on slow HDDs and slow CPUs, so the idea of using the oldest boxes for PBS is a really bad one.

=> use only modern CPU and fast SSD on a dedicated server if you need fast performance.

Do you have any benchmarks you can share? Just curious.

After reading quite a bit here we are dumping our existing PBS setup with spinners and getting some fresh lab equipment in. Seems like this (current non-optimal) setup is giving us some issues.

3rd gen Xeons with 8/16TB NVMe drives.

It sounds like the optimal benchmark is a full backup, not an incremental one?

Great thread!
 
No, I didn't... I didn't guess that! OK, it might sound stupid, but I pressed "remove" and in my mind everything was removed...
But that's not how it works. You only delete the index files of those backups, not the chunk files that contain all the data. Deleting chunks comes at a high cost because, due to deduplication (multiple VMs can share the same data, which is only stored once), every single chunk file of every backup snapshot needs to be read and written, which means millions upon millions of IOs hitting the disks to actually delete stuff.
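You can see this on the datastore itself: the data lives as content-addressed chunk files under the hidden .chunks/ directory, and removing a backup in the GUI does not touch them (the path below assumes your /var/lib/vz datastore):

Code:
# count the chunk files; this number does not drop until garbage collection runs
find /var/lib/vz/.chunks -type f | wc -l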
 
But that's not how it works. You only delete the index files of those backups, not the chunk files that contain all the data. Deleting chunks comes at a high cost because, due to deduplication (multiple VMs can share the same data, which is only stored once), every single chunk file of every backup snapshot needs to be read and written, which means millions upon millions of IOs hitting the disks to actually delete stuff.
OK, thanks for spotting this. Tomorrow I'm going to redo the tests keeping this in mind --> as I cannot wait 24 hours between one test and the next, I'll simply drop the datastore, remove the /var/lib/vz content and restart. It will take a bit of time, but surely less than waiting 72 hours to complete the three tests :)
 
OK, thanks for spotting this. Tomorrow I'm going to redo the tests keeping this in mind --> as I cannot wait 24 hours between one test and the next, I'll simply drop the datastore, remove the /var/lib/vz content and restart. It will take a bit of time, but surely less than waiting 72 hours to complete the three tests :)
If it's just a test setup, you could set the system clock 1 day and 5 minutes ahead, between the prune and the GC.
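Roughly something like this (test setup only; temporarily disable NTP so the clock stays ahead):

Code:
timedatectl set-ntp false
# jump a bit more than 24 hours ahead so GC considers the chunks old enough
date -s "@$(date -d 'now + 1 day + 5 minutes' +%s)"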
 
>>- local SSDs as datastore (enterprise SSDs)
the datastore was the PVE folder /var/lib/vz (so on the main PVE filesystem, on the NVMe)

Please specify your NVMe model and which filesystem you use.

proxmox-backup-client benchmark --repository root@pam@192.168.163.66:var-lib-vz
Uploaded 642 chunks in 5 seconds.
Time per request: 7815 microseconds.
TLS speed: 536.70 MB/s
SHA256 speed: 343.03 MB/s
Compression speed: 321.54 MB/s
Decompress speed: 455.20 MB/s
AES256/GCM speed: 924.55 MB/s
Verify speed: 191.74 MB/s

The recommended requirements are "CPU: Modern AMD or Intel 64-bit based CPU, with at least 4 cores". The CPU in my node is Intel(R) Xeon(R) D-2146NT CPU @ 2.30GHz. Surely a bit old.
Your CPU is 5 years old and was, at launch, one of the slowest Xeon CPUs. So do not purchase low-power CPUs and expect the highest performance.

Compare your results with https://forum.proxmox.com/threads/how-fast-is-your-backup-datastore-benchmark-tool.72750/

So yes, the compression issue seems related to the CPU in your host.
 
Moreover, PBS's power is in the subsequent backups: it blasts past a full vzdump even with your numbers, and even PBS on a local HDD beats vzdump on a local SSD.
Incremental/dedup backup is the key,
+ the QEMU dirty bitmap is the second key (which is only available with snapshot- or suspend-mode backups, and only if a previous snapshot- or suspend-mode backup was done earlier).
 
which is only available with suspend mode backup and if previous suspend mode backup was done earlier
And also with snapshot-mode backups. Once you shut down the VM (so no stop-mode backups) or reboot the server, the dirty bitmap will be discarded, forcing you to read and hash the whole virtual disk again. But yes, if you store 10 backups of each VM with only 1% changes between the backups, the first backup will be slow and will read + transfer 100%. The next ones will only consume ~1% additional space each, and ~99% of that VM can be skipped, so a massive speed and space improvement. VZDump will always read + transfer + store the whole 100%.
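As a rough illustration with the 32GB VM from the first post (assuming ~1% change per backup and 10 retained backups):

Code:
echo "vzdump total:  $((10 * 32)) GB"                      # full image stored every time
echo "PBS total:     ~$(echo "32 + 9 * 0.32" | bc) GB"     # full first run + ~1% per further backup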
 
proxmox-backup-client benchmark --repository root@pam@192.168.163.66:var-lib-vz
Uploaded 642 chunks in 5 seconds.
Time per request: 7815 microseconds.
TLS speed: 536.70 MB/s
SHA256 speed: 343.03 MB/s
Compression speed: 321.54 MB/s
Decompress speed: 455.20 MB/s
AES256/GCM speed: 924.55 MB/s
Verify speed: 191.74 MB/s
with this result, the 200MiB/s throughput makes kinda sense, since what the backup does roughly mimics the verification (read data + checksumming + compression), and that is capped by your cpu at ~200MiB/s
(there is some parallelism and pipelining going on so it's not exactly that, but in the real world there is also networking involved, which makes it a bit slower than the pure cpu-based benchmarks)

maybe omitting compression could gain a bit of performance in your case, but probably not much. also, in many setups the cpu is not the bottleneck but the disk and network, and there the compression gives an advantage (less data to send/write)
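as a very rough back-of-the-envelope check using the numbers above (treating hashing and compression as purely serial, which they aren't quite):

Code:
# combined single-stream rate if SHA256 and zstd ran strictly one after the other
echo "1 / (1/343.03 + 1/321.54)" | bc -l      # ~166 MB/s

with pipelining the real figure lands somewhere between that and the slower of the two (~320 MB/s), so the observed ~200-220MB/sec fits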
 
Another way to say the same thing: for subsequent backups, PBS's incremental/dedup approach only has to read the fast VM disk and then write only the changed data, so the destination can be slow. IMO, local SSD for PBS is recommended in the datacenter case, or with many, many VMs, or if fast restores are needed too.
 
>> Please specify your NVMe model and which filesystem you use.
NVMe: WDS500G3X0C Western Digital SN750 M.2 500 GB PCI Express 3.0 NVMe
Filesystem: the one the PVE installer used when formatting the disk for the installation: ext4

>> with this result, the 200MiB/s throughput makes kinda sense, since what the backup does roughly mimics the verification (read data + checksumming + compression), and that is capped by your cpu at ~200MiB/s
And that's exactly the point. I agree with all of you that the CPU is the limiting factor in my case, so why not offer, in general, an option that lets the user decide whether to apply compression or not? In the end, compression is not "vital": a client can accept using a bit more disk space and keep their old commodity hardware rather than purchasing dedicated servers with expensive CPUs.

Anyway, thanks to all for your feedback.
I'm going to repeat my tests, making sure that each backup is the first, full one, in order to avoid any side effects introduced by "incremental" backups. Furthermore, I will make sure that the VM being backed up is switched off and that the backup mode is "stop" (no suspend, no snapshot).

I'll keep you posted.
Regards
Mauro

 