NAS iSCSI LUN with LVM-shared on top - bad IO while a VM delete is ongoing

jsk73

New Member
Mar 23, 2026
1
0
1
Hello everyone,

My homelab Proxmox cluster is a 2-node setup:

- I have a TrueNAS box and created an HDD and an SSD iSCSI share for the 2-node Proxmox cluster.
- iSCSI is configured with multipath over 3 paths (separate physical NICs, 1 Gbps each):

Code:
root@pve1:~# multipath -ll
mpathb (36589cfc0000000daa067606013d13fe0) dm-6 TrueNAS,iSCSI Disk
size=730G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 6:0:0:1 sdb 8:16 active ready running
  |- 7:0:0:1 sdd 8:48 active ready running
  `- 8:0:0:1 sdc 8:32 active ready running
mpathc (36589cfc000000b1c40188c025655bf67) dm-7 TrueNAS,iSCSI Disk
size=2.8T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 9:0:0:0  sde 8:64 active ready running
  |- 10:0:0:0 sdf 8:80 active ready running
  `- 11:0:0:0 sdg 8:96 active ready running

- Then I created 2 LVM-shared storages on top of those 2 LUNs, with snapshot chains allowed, shared enabled, "Wipe Removed Volumes" enabled, and saferemove_throughput set to 1 Gbps.
- VM disks: OS on the SSD LUN (SATA SSD) and data on the HDD LUN, all in qcow2 format.
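For reference, the relevant /etc/pve/storage.cfg entries look roughly like this (storage and VG names are placeholders; I'm assuming saferemove_throughput is in bytes/s since it is handed to cstream, so the option names and units are worth double-checking against the docs):

```
lvm: ssd-lun
        vgname vg_ssd
        content images,rootdir
        shared 1
        snapshot-as-volume-chain 1
        saferemove 1
        saferemove_throughput 125000000
```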

Everything works fine, until I delete a VM: because "Wipe Removed Volumes" is enabled, Proxmox tries to zero out the deleted LV. But even with the throughput limit set to 1 Gbps, it took about 5 minutes to clean a 20G disk on the SSD.
And during this time (while Proxmox zeroes out the deleted disk), all other VMs, on the same or a different host, get very bad IO lag; one VM even remounted its filesystem read-only.

I know that multipath over 1 Gbps links isn't great, but is it really so bad that deleting a 20G disk takes that long? It even affects the other VMs' performance.
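For scale, a quick back-of-the-envelope check (my numbers, assuming the 1 Gbps cap is the only bottleneck): even the best case for zeroing 20 GiB is close to 3 minutes, so 5 minutes while the wipe shares the links with VM traffic is not far off:

```shell
# Best-case time to zero a 20G LV at a 1 Gbps cap
disk_bytes=$(( 20 * 1024 * 1024 * 1024 ))   # 20 GiB
cap_Bps=$(( 1000000000 / 8 ))               # 1 Gbps in bytes/s
echo "$(( disk_bytes / cap_Bps )) seconds"  # ~171 s, just under 3 minutes
```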

Is anyone with this kind of storage setup seeing this issue? I don't have the equipment to test higher-bandwidth NICs yet, but I wonder: would the issue be less noisy with 10 Gbps or 25 Gbps NICs? Should I lower the throughput limit to about 50% of the NIC bandwidth?
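On the 50% idea: assuming saferemove_throughput is interpreted as bytes per second (passed through to cstream's rate limit), half of a 1 Gbps link would be roughly:

```shell
# Half of 1 Gbps, converted to bytes/s for saferemove_throughput
# (assumption: the value is bytes/s, as consumed by cstream)
echo $(( 1000000000 / 8 / 2 ))   # 62500000 bytes/s (= 500 Mbps)
```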

I'm testing a solution for my team's migration from VMware to Proxmox, and we have to make use of our current SAN storage. But I've been stuck on this kind of problem in my homelab for weeks now without a solution - could someone test or help with this, please?

Thanks and Regards.
 
Last edited: