Currently, I'm running a large Proxmox host node without issues: tons of RAM, and each VM is small (usually around 2 GB RAM and a 25 GB disk). However, another host with limited RAM and CPU power, but only a few VMs with large disks (several hundred GB, up to 800 GB), seems to become unresponsive during backups: the VMs run slowly and the Proxmox web interface gets completely unusable. Monitoring shows the otherwise mostly under-utilized node jumping to heavy load (system load above the CPU count) along with heavy disk I/O, although the disk I/O is of course to be expected. So I'm wondering what I could do to improve the situation, besides increasing the specs of the host.
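For anyone with similar symptoms, these are the vzdump knobs I'd experiment with first. The option names come from `/etc/vzdump.conf`; the values below are placeholders to illustrate, not recommendations:

```
# /etc/vzdump.conf -- illustrative values, tune for your hardware
# cap backup bandwidth (KiB/s) so guests keep some I/O headroom
bwlimit: 100000
# lower the backup job's I/O priority (only effective with the BFQ scheduler)
ionice: 7
# number of zstd compression threads (0 = half of the available cores)
zstd: 1
```

The same options can also be set per backup job in the GUI or on the `vzdump` command line.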
As it's the default choice, I'm using ZSTD compression. Might this be causing quite some load? How does the compression work? Is it done in a streaming fashion, so that it scales well with large disks, or is compressing 800 GB disks a bad idea in general?
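To answer my own sub-question: zstd compresses as a stream, so the disk size shouldn't matter for memory use, only for runtime. A quick way to convince yourself, assuming the `zstd` CLI is installed (the 1 MiB size here is arbitrary):

```shell
# pipe random data through a zstd compress/decompress round trip;
# everything is streamed, no temporary file the size of the input is needed
head -c 1048576 /dev/urandom | zstd -q -T0 | zstd -dq | wc -c
```

The byte count printed at the end should equal the original 1048576 bytes.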
EDIT: Never mind the compression question; the issue persists even with uncompressed (raw) backups, and in both cases the system becomes unresponsive after roughly 280 GiB. If anybody has ideas why this might happen, I'm still looking for them.
EDIT2: Odd. After trying multiple different setups, the problem only seems to occur when backing up to NFS shares (it doesn't matter whether they are local, as in the same rack, or off-site). Well, probably a sign I should finally transition everything to PBS (Proxmox Backup Server).
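In case someone hits the same wall: my working theory (an assumption on my part, not something I've verified) is that the page cache fills with dirty pages faster than the NFS share can absorb them, and once the kernel's writeback thresholds are crossed everything stalls behind the flush. The thresholds can be inspected, and lowered, via sysctl; the byte values below are guesses to experiment with, not recommendations:

```shell
# show the current percentage-based writeback thresholds; on a box with
# lots of RAM these allow many GiB of dirty pages before writeback starts
sysctl vm.dirty_ratio vm.dirty_background_ratio

# possible mitigation (needs root): switch to absolute byte limits so
# writeback to the slow NFS target starts early and throttles writers sooner
# sysctl -w vm.dirty_background_bytes=67108864   # begin writeback at 64 MiB
# sysctl -w vm.dirty_bytes=268435456             # block writers at 256 MiB
```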
Can be considered solved for me.