VM Backup sometimes causes huge CPU/RAM usage

BloodyIron

I haven't found a pattern for this yet, but for the last few weeks (I think), one or a few of my VMs will sometimes have problems while being backed up.

I'm pretty sure the backups are completing just fine. I don't have Proxmox Backup Server present in this particular environment, so I'm doing full VM backups while they're online. And generally >99% of my VMs are Linux.
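For context, the scheduled jobs boil down to roughly this kind of vzdump invocation (the VMID and storage name here are just placeholders, not my actual config):

====
Code:
# Roughly what each scheduled job runs: a snapshot-mode (online) backup
# of a running VM, zstd-compressed, sent to the backup storage.
# "101" and "backup-nfs" are placeholders, not my actual setup.
vzdump 101 --mode snapshot --compress zstd --storage backup-nfs
====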

I just saw the effect again last night: during the backup of the VM, the CPU and RAM usage in the VM itself spiked hugely while the backup was running. This leads to operational problems. Here's a screenshot:

[Screenshot: guest CPU and RAM usage spiking during the backup window, then settling back down]


===========
Now the REALLY weird thing is that this doesn't happen every day, AND it happens to DIFFERENT VMs each time.

The system "resolves" itself, as you can see in the picture, but I really think it's worth figuring out what on earth is going on here and what I can do about it.

Typically the guest OSes are Ubuntu, 24.04/22.04 flavours, so I don't think it's distro- or generation-specific. And they do have qemu-guest-agent installed and operational in the VMs.
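If anyone wants to sanity-check the same thing on their end, a quick way to confirm the agent is reachable (the VMID is just an example):

====
Code:
# Exits silently (status 0) when the guest agent is reachable;
# errors out if the agent isn't running inside the guest.
qm agent 101 ping
====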

The package versions for the Proxmox VE Node in the above screenshot are:

====
Code:
proxmox-ve: 8.3.0 (running kernel: 6.8.12-5-pve)
pve-manager: 8.3.1 (running version: 8.3.1/fb48e850ef9dde27)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-5
proxmox-kernel-6.8.12-5-pve-signed: 6.8.12-5
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
proxmox-kernel-6.5.13-6-pve-signed: 6.5.13-6
proxmox-kernel-6.5: 6.5.13-6
pve-kernel-5.13.19-1-pve: 5.13.19-3
ceph-fuse: 16.2.15+ds-0+deb12u1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: not correctly installed
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.1
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.2-1
proxmox-backup-file-restore: 3.3.2-2
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.3
pve-cluster: 8.0.10
pve-container: 5.2.2
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-2
pve-ha-manager: 4.0.6
pve-i18n: 3.3.2
pve-qemu-kvm: 9.0.2-4
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.2
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1
====

In this particular PVE Cluster there are two nodes, and the issue _I THINK_ happens on both.

As you can see, the software on each node is fairly current, only a few weeks old. Non-subscription repo.
=====


Anyone have any ideas?
 
Have you tried fleecing?

Ahh, some new feature stuff since v8.2 that I haven't looked at yet, thanks! I'll try that! I'm also going to try changing "zstd threads" from the default (1) to 0 (which the docs say uses half of the available cores), as I have plenty of cores not doing much at that time of day.
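For anyone following along, that's this knob; it can be set node-wide in /etc/vzdump.conf (per-job settings override it):

====
Code:
# /etc/vzdump.conf -- node-wide vzdump defaults (per-job settings win)
# zstd thread count: 1 is the default; 0 means half of the available cores
zstd: 0
====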

I don't have appropriate local storage for fleecing, so I'm going to use the same network storage that holds the VM disk images (I'll have to review a better option in the future).
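For the record, I'm enabling it roughly like this (the storage ID is a placeholder for my setup); the same thing can be set per-job in the GUI under Advanced:

====
Code:
# With fleecing on, old block data gets copied to a temporary fleecing
# image on the given storage, so guest writes don't have to wait on the
# (slower) backup target. "vm-disk-store" is a placeholder storage ID.
vzdump 101 --fleecing enabled=1,storage=vm-disk-store
====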

So yeah, let's see how that goes, thanks! If any other ideas come up, please let me know :) Might take a few days before I see results.
 
Okay, so fleecing has not fully solved the problem. Every now and then I still have systems that spike their RAM into swap, just like the original example, and it results in outages.

I'm going to leave fleecing on, as it still seems like a good idea, but I still need help on this one, please.
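In the meantime, to catch the next occurrence in the act, I'm going to log swap activity inside the guests around the backup window with a quick sketch like this (nothing Proxmox-specific, just reading /proc/vmstat):

====
Code:
#!/bin/bash
# Run inside a guest: log the kernel's swap-in/swap-out counters once a
# minute, so a spike during the backup window shows up with a timestamp.
while true; do
    echo "$(date -Is) $(grep -E '^pswp(in|out) ' /proc/vmstat | tr '\n' ' ')"
    sleep 60
done >> /var/log/swap-activity.log
====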
 