Snapshot deletion too slow

PatoGB

New Member
Aug 4, 2025
Hi,

I have Proxmox 9 with two hosts connected over iSCSI (10 Gbps) to an IBM FlashSystem 5200 storage. Deployment and migration of VMs work very well, but deleting snapshots is far too slow. Does anyone know how to improve this part, or is it a flaw in the virtualizer?

Thanks,
 
Hi,
what storage type are you using for the VM disks? If the VM disks are qcow2 files on a network storage, it's recommended to remove snapshots while the VM is shut down.

There is a tech preview of the snapshot-as-volume-chain feature, which allows much faster snapshot operations: https://pve.proxmox.com/pve-docs/chapter-pvesm.html#storage_lvm It can be configured for LVM or directory-based storages.
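For example, enabling it on an existing LVM storage could look like this (a minimal sketch; "lvm-san" is a placeholder storage name, and the option name follows the linked docs chapter):

Code:
# enable the tech-preview snapshot-as-volume-chain option on an LVM storage
pvesm set lvm-san --snapshot-as-volume-chain 1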
 
Hi Fiona,

The snapshot-as-volume-chain option is set to 1, the disk is qcow2, and the storage type is LVM.

The task viewer shows "erase disk" after deleting the snapshot (this is the process that takes too long), and it doesn't allow me to migrate the VM.

 

Oh, I see. Yes, that is because you have the saferemove flag enabled on the storage: https://pve.proxmox.com/pve-docs/chapter-pvesm.html#pvesm_lvm_config The VM should already be functioning fine and not be blocked by that removal; only other VM operations are blocked.

If you need that flag, I suggest keeping it. There is an improvement currently being worked on to make this much faster when the storage supports discard: https://lore.proxmox.com/pve-devel/mailman.116.1755518974.385.pve-devel@lists.proxmox.com/
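To check whether the flag is set, look at the storage definition; if you decide you don't need the zeroing on delete, it can be turned off (sketch only, "lvm-san" is a placeholder storage name):

Code:
# show the storage definition and look for a "saferemove 1" line
grep -A5 'lvm-san' /etc/pve/storage.cfg

# disable the zeroing on delete, if that safety is not needed
pvesm set lvm-san --saferemove 0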
 
Excellent news.

I hope other aspects of the environment improve soon as well, such as replication and access to FC/iSCSI SAN storage.
 
The task viewer shows "erase disk" after deleting the snapshot (this is the process that takes too long), and it doesn't allow me to migrate the VM.
The problem is that saferemove is throttled by default to a whopping 10 MBytes/s. If you check with ps -ef while a snapshot is being removed, you'll see a cstream process zeroing the snapshot volume. Until the very welcome discard-based improvement gets published, use this to raise the limit to whatever you see fit:

Code:
pvesm set <PVE-storage-name> --saferemove_throughput <Bytes/s>

E.g., to raise the limit to 600 MBytes/s for storage "lvm--LVM01": pvesm set lvm--LVM01 --saferemove_throughput 629145600
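To sanity-check that value, and to confirm what is actually running during the "erase disk" task (hypothetical shell session):

Code:
# 600 MBytes/s expressed in bytes/s
echo $((600 * 1024 * 1024))    # prints 629145600

# while the snapshot delete runs, the throttled zeroing process shows up as:
ps -ef | grep [c]stream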

I would really love to see that setting exposed in the webUI or at least mentioned in the docs with instructions on how to modify it. Oh, and a higher default too :)
 
Hi Victor,

Thanks for your help, I'm going to apply that change. Do you know what the maximum value for that parameter is? A great improvement for version 9 would be to allow thin-provisioned vdisks on SAN storage volumes.
 
Do you know what the maximum value for that parameter is?
As much of your SAN/network performance as you want to devote to a saferemove operation without impacting other operations ;)
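For a rough upper bound on your setup: a 10 Gbps iSCSI link carries at most about 1.25 GBytes/s raw, so the 629145600 B/s example above would already use roughly half of it:

Code:
# raw byte rate of a 10 Gbps link
echo $((10 * 1000 * 1000 * 1000 / 8))   # prints 1250000000, ~1.25 GBytes/s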

A great improvement for version 9 would be to allow thin-provisioned vdisks on SAN storage volumes
I don't see that happening any time soon unless some sort of cluster-aware filesystem with thin provisioning gets properly implemented on Linux, i.e. a VMware VMFS alternative, or someone brings OCFS2 up to modern standards. Many SANs already provide thin provisioning, compression and/or deduplication internally, so not having thin provisioning on the client side isn't that much of an issue.

Anyway, for me this is just a compromise to be able to reuse hardware from other hypervisors and eventually migrate to Ceph in about 90% of cases, the remaining 10% being special cases of applications that require super-low disk access latencies or stubborn customers who simply love SAN.
 