Slow disk formatting with high IO Pressure Stall

freeman307

New Member
Apr 8, 2026
I started a thread on the Proxmox reddit which quickly went down a rabbit hole and finally hit the bottom. https://www.reddit.com/r/Proxmox/comments/1sf3l9y/high_io_pressure_stall_during_os_install_iscsi/

TL;DR
I am experiencing IO pressure stalls peaking at 30%, along with extremely long disk-formatting times, while installing any OS in a VM. The VM sits on an LVM volume group backed by iSCSI on a Pure X50 array, with disks in qcow2 format and snapshots as volume-chain enabled on the LVM storage. I initially thought this could be a multipathing or network issue, but after lots of testing it turns out that having 'Discard' enabled on the qcow2 disk, combined with snapshots as volume-chain on the LVM, causes the slow disk formatting and the high IO pressure stall. Turning off 'Discard' on the disk results in normal format times and no IO pressure stall, but I'm pretty sure I want Discard on so the array can reclaim space. The same behavior does not happen on NFS-backed storage (same array) with snapshots as volume-chain enabled, a qcow2 disk, and 'Discard' enabled. Turning off snapshots as volume-chain on the iSCSI-backed LVM forces raw disks, and with 'Discard' on there is no slow formatting and no IO stall.
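For concreteness, the 'Discard' toggle in question lives on the disk line of the VM config (the VMID, storage name, and volume name below are hypothetical placeholders for my setup):

```
# /etc/pve/qemu-server/100.conf -- hypothetical VMID and storage names
scsi0: pure-lvm:vm-100-disk-0.qcow2,discard=on
# removing ",discard=on" (or unticking Discard in the GUI) is the "off" state I tested
```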

Is this known/expected behavior, or is it a bug or limitation of LVM with snapshots as a volume-chain?
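For anyone trying to reproduce this: the stall percentages I'm quoting come from the kernel's pressure stall information (PSI) on the PVE host. A minimal sketch for pulling the 10-second average out of `/proc/pressure/io` (the sample line below is hard-coded for illustration; on a live host you'd pipe the real file):

```shell
# Assumption: Linux with PSI enabled, as on PVE hosts.
# /proc/pressure/io contains a "some" line like:
#   some avg10=30.12 avg60=12.50 avg300=4.01 total=123456
# This helper prints the 10-second average IO stall percentage from such input.
psi_some_avg10() {
  awk '/^some/ { for (i = 1; i <= NF; i++) if ($i ~ /^avg10=/) { sub(/^avg10=/, "", $i); print $i } }'
}

# On a real host:  psi_some_avg10 < /proc/pressure/io
# Demo with the sample line above:
printf 'some avg10=30.12 avg60=12.50 avg300=4.01 total=123456\n' | psi_some_avg10
```

Watching that value tick up while the guest runs mkfs is how I pinned the stall to the formatting step.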
 
I do know that the snapshot-as-volume-chain feature, which uses qcow2, comes with a performance penalty. This was described by @bbgeek17 in his writeup on these snapshots:
https://kb.blockbridge.com/technote/proxmox-qcow-snapshots-on-lvm/index.html#performance-degradation

In general, though, LVM should have good performance (see another piece by bbgeek):
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/

It might be, however, that I'm mixing things up and your issue is not actually related to this.
 
Yeah, I understand there's a performance penalty, but this is a fresh LVM, a fresh VM, and a fresh qcow2 disk, and it only happens during disk formatting. Once the disk is formatted, the OS installs at normal speed and I'm able to saturate the 25 Gb networking with disk tests in the VM.
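For the in-guest disk tests I'd normally reach for fio, but as a minimal sketch that runs anywhere, a dd-based sequential-write probe shows the idea (the path and size are placeholders; inside the guest you'd point it at the disk under test):

```shell
# Rough sequential-write probe (a sketch; fio gives far better numbers,
# but dd is universally available). conv=fsync forces the data to stable
# storage before dd reports its throughput line.
TESTFILE=/tmp/throughput-probe    # placeholder path
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f "$TESTFILE"
```

The last line of dd's output is the summary with the achieved MB/s.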

We are not planning on having snapshots running constantly; that's what we have Veeam for. I guess we are still stuck in a VMware mindset, where it was nice to quickly snapshot a VM before testing or running a patch/upgrade and be able to revert if it all went bad, without having to restore the entire VM from backups. Snapshots are a nice-to-have, not a requirement, but this is a crazy performance hit during disk formatting with the 'Discard' option on. I'm still hoping for a concrete answer on whether the terrible formatting performance with 'Discard' enabled is a bug or a limitation.
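One observation that may explain why only formatting is slow: when the virtual disk advertises discard support, mkfs issues a device-wide TRIM pass before writing the filesystem. A possible workaround (an assumption on my part, not something confirmed in this thread) is to skip that pass at format time while leaving 'Discard' enabled for later reclamation:

```shell
# Sketch: format without the initial TRIM pass, keeping 'Discard' enabled
# on the virtual disk for runtime reclamation (e.g. via fstrim later).
# Demonstrated on a loopback image file so it runs anywhere; substitute
# the real in-guest device (e.g. /dev/sdb).
truncate -s 64M /tmp/demo.img                 # stand-in for the real device
mkfs.ext4 -F -q -E nodiscard /tmp/demo.img    # ext4: skip discard at mkfs time
# mkfs.xfs -K /dev/sdb                        # xfs equivalent: -K skips discard
rm -f /tmp/demo.img
```

If formatting with `-E nodiscard` is fast even with 'Discard' on, that would point at the discard path through the qcow2-on-LVM chain rather than normal writes.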