that does look more like an IO issue to be honest. how is your ZFS configured? could you post "zpool status", "zpool iostat 10" and "zfs get all ..." (the last one for each zvol used by the two VMs you posted)?
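for reference, something like this (the zvol name in the last command is just an example, replace it with the actual disks referenced in both VM configs):

```
zpool status
zpool iostat 10          # let this run for a few intervals while the VM is slow
# example zvol path - adjust to the disks from the VM configs:
zfs get all rpool/data/vm-100-disk-0
```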
https://bugzilla.proxmox.com/show_bug.cgi?id=3752
this is a known issue that requires some significant refactoring, but with the recent PDM <-> PBS work IIRC we now have a streaming implementation that we could port over to PVE as well.
it would probably help if you'd include more relevant details:
- VM config of the slow one and a fast one
- storage setup
- any relevant logs
- an actual benchmark (CPU, disk, ...) showing the performance difference (see the fio sketch below)
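for the disk part, a short fio run inside both the slow and the fast guest is usually enough. a minimal sketch, the parameters are just a sensible starting point:

```
# 4k random read against a dedicated test file, run inside each VM
fio --name=randread --filename=/root/fio-test --size=4G \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting
```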
please post the output of "proxmox-boot-tool status". normally it's enough to remove old kernel packages. if that doesn't work, you might have to clean up manually (mount /dev/disk/by-uuid/FB5D-B209 somewhere, clean up, then unmount it)
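if it does come to manual cleanup, it would look roughly like this. a sketch - the mountpoint is arbitrary, and the EFI/proxmox layout assumes an ESP managed by proxmox-boot-tool:

```
mkdir -p /tmp/esp
mount /dev/disk/by-uuid/FB5D-B209 /tmp/esp
ls -lh /tmp/esp/EFI/proxmox/                 # one directory per kernel version
# rm -rf /tmp/esp/EFI/proxmox/<old-version>  # only remove versions you no longer need!
umount /tmp/esp
```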
that is also very dangerous!
you can try moving some of the .chunks hierarchy to other storage (without adding symlinks) and then run GC. it will complain about missing chunks, but should hopefully free up enough space so that you can move them back and re-run GC a second time.
don't attempt...
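a sketch of the move-and-GC sequence described above, assuming the datastore is called "backup" and lives at /datastore/backup, with spare space mounted at /mnt/spare (all of these are example names):

```
# move a subset of the chunk store away - no symlinks!
mkdir -p /mnt/spare/chunks-tmp
mv /datastore/backup/.chunks/00* /mnt/spare/chunks-tmp/
# run GC - it will log errors about the missing chunks, but frees space
proxmox-backup-manager garbage-collection start backup
# merge the chunks back (rsync handles re-created directories gracefully), then GC again
rsync -a /mnt/spare/chunks-tmp/ /datastore/backup/.chunks/
rm -rf /mnt/spare/chunks-tmp
proxmox-backup-manager garbage-collection start backup
```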
there are two test kernels here:
http://download.proxmox.com/temp/kernel-6.8-ice-memleak-fix-1/ (6.8 for Bookworm)
http://download.proxmox.com/temp/kernel-6.14-ice-memleak-fix-1/ (6.14 for Trixie)
with a potential upstream fix. feedback would be appreciated!
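if you want to try one of those builds: download the matching .deb from the directory listing and install it locally, roughly like this (the filename is a placeholder, use whatever is actually published there):

```
wget http://download.proxmox.com/temp/kernel-6.8-ice-memleak-fix-1/<kernel-image>.deb
apt install ./<kernel-image>.deb
reboot
```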
thanks, that was indeed wrong: https://git.proxmox.com/?p=proxmox-offline-mirror.git;a=commitdiff;h=6945134b3ba345687f3ea84870fd98204947adaf
next bump will include the fix!
https://lore.kernel.org/all/20250825-jk-ice-fix-rx-mem-leak-v2-1-5afbb654aebb@intel.com/ seems like a likely fix for this memory leak. a test kernel is available here, and we'd appreciate feedback on whether it fixes your issues:
http://download.proxmox.com/temp/kernel-6.8-ice-memleak-fix-1/...
the ESP is only mounted during kernel or bootloader updates when proxmox-boot-tool is used. that's also why it doesn't show up in `df -h` in such a setup.
how is your datastore's storage configured? if you cannot add more space, you really need to free up some space (for example, by deleting some more snapshots - but you need to ensure no new backups or syncs immediately undo this!) and then trigger a manual GC.
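on the CLI that sequence would look roughly like this. a sketch - the datastore name, repository and snapshot path are all examples:

```
# remove snapshots you no longer need (repeat as required)
proxmox-backup-client snapshot forget "vm/100/2024-01-01T00:00:00Z" --repository backup
# then reclaim the space
proxmox-backup-manager garbage-collection start backup
```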
if you have backups, those contain a copy of the guest config that you can drop into /etc/pve/local/lxc or /etc/pve/local/qemu-server . otherwise, you need to recreate the configs from memory and the current running state.
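to get the config out of a backup, something like this should work. a sketch, assuming a vzdump archive on a storage called "local" (the volume ID is an example - adjust it to an actual backup of the guest):

```
# print the guest config embedded in a backup volume and drop it into place
pvesm extractconfig local:backup/vzdump-qemu-100-2024_01_01-00_00_00.vma.zst \
    > /etc/pve/local/qemu-server/100.conf
```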