iSCSI LUN space won't be reclaimed, and a possible bug

m912

New Member
Oct 24, 2025
On TrueNAS I created a ZVol with Sparse: on, then created an iSCSI target with Disable Physical Block Size Reporting: off.

PVE mapped the iSCSI target; I "formatted" it and used it for VMs, which works with no problem.

On VM creation, one or more vm-<ID>-disk-<ID> volumes are created.

For example, a 1G disk was created.

The issue is that after removing the VM, or the VM's disks, the PVE side shows 0 used but the TrueNAS side still displays 1G.
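The mismatch can be seen on the TrueNAS side with, for example (pool/zvol name is a placeholder for your setup):

```shell
# USED/REFER stay at ~1G even after the PVE-side volume was deleted
zfs list -o name,volsize,used,refer tank/pve-lun
```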

----------

As for the "formatting", I tried everything: ZFS, LVM and LVM-Thin.

First, I mapped an iSCSI target.

Then

ZFS
- not supported from the UI, but workable via the CLI
- use the CLI to create a ZFS pool on the LUN first
- from the UI I can then create a ZFS storage from this pool
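Roughly what I did (device path, pool name and storage name are examples; adjust to your setup):

```shell
# identify the mapped iSCSI LUN (the path will differ per setup)
ls -l /dev/disk/by-path/ | grep iscsi

# create a ZFS pool directly on the LUN
zpool create tank-iscsi /dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2025-10.example:target0-lun-0

# register it as a PVE storage (also possible from the UI at this point)
pvesm add zfspool zfs-iscsi --pool tank-iscsi --sparse 1 --content images,rootdir
```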

LVM-Thin
- not supported from the UI, but workable via the CLI
- use the CLI to create the PV -> VG -> thin pool first
- from the UI I can then create an LVM-Thin storage from this pool
- option Discard: on
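The CLI part sketched out (/dev/sdX, the VG, thin pool and storage names are all examples):

```shell
# /dev/sdX is the mapped iSCSI LUN
pvcreate /dev/sdX
vgcreate vg-iscsi /dev/sdX

# create a thin pool using most of the VG
lvcreate -l 95%FREE --type thin-pool --name thinpool vg-iscsi

# register the thin pool as a PVE storage
pvesm add lvmthin lvmthin-iscsi --vgname vg-iscsi --thinpool thinpool --content images,rootdir
```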

LVM
- supported from the UI, so I simply created it there
- Wipe Removed Volumes: on
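For completeness, the CLI equivalent (assumes the VG already exists on the LUN; VG and storage names are examples):

```shell
# register an existing VG on the iSCSI LUN as LVM storage, with
# "Wipe Removed Volumes" (saferemove) enabled
pvesm add lvm lvm-iscsi --vgname vg-iscsi --content images,rootdir --saferemove 1
```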

Only LVM with "Wipe Removed Volumes" enabled actually reflects the freed space on the LUN side, and only after a very, very long wipe.

Raising the throughput limit, e.g.

pvesm set <LVM> --saferemove_throughput 1048576000

improves the speed, but it is still slow if the LVM is large!

Furthermore, if a VM has more than one disk on this LVM, removing the VM creates more than one wipe task (one task per disk), all wiping the same LUN at the same time, which eventually crashes the LUN!

.
.
.
33533382656 B 31.2 GB 523.9 s (8:43 min) 64001764 B/s 61.04 MB/s
zero out finished (note: 'No space left on device' is ok here): write: No space left on device
Volume group "pve1vm22x-lp" not found <------------ HERE
TASK ERROR: lvremove 'pve1vm22x-lp/del-vm-22000-cloudinit' error: Cannot process volume group pve1vm22x-lp

HERE: the LUN source has already disappeared at this point.

I tested removing the VM disks one by one, and that works fine. But VM -> Remove wipes all disks at the same time and crashes 100% of the time!
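Until this is fixed, the only reliable workaround I found is removing the disks one at a time, so only one wipe task runs against the LUN at any moment. Roughly (VM ID 22000 and the disk keys are examples; as I understand it, `qm disk unlink --force` destroys a single volume and triggers its wipe task):

```shell
# delete one disk and wait for its wipe task to finish
qm disk unlink 22000 --idlist scsi1 --force

# only then delete the next one
qm disk unlink 22000 --idlist scsi0 --force

# finally remove the now disk-less VM configuration
qm destroy 22000
```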

---

If this is not a bug, then a solution is needed!

If the LUN source must meet some requirement for discard to be passed down, I hope Proxmox can specify it; I think the documentation is lacking on this part...
Unequal sizes on the two sides are not acceptable...

Yet this should be a simple and common setup, so why do so few people try to debug it? Do people not care?

 
What storage did you use on the PVE side for this test, i.e. what "format"?

With ZFS and iSCSI on the SAN side, the best way to use it is ZFS-over-iSCSI, if the storage supports it. I lost track of whether TrueNAS or FreeNAS is the one for which an implementation exists. You may look into that.

Besides that, yes, trimming or blkdiscard-ing is necessary if you want to give the space back to the backend storage device.
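A minimal sketch of both, assuming a Linux guest and example device/volume names:

```shell
# inside the guest: release freed filesystem blocks back to the
# virtual disk (requires Discard enabled on the VM disk in PVE)
fstrim -av

# on the PVE host: discard an entire no-longer-needed block device,
# e.g. a leftover LV on the iSCSI VG (path is an example)
blkdiscard /dev/vg-iscsi/vm-22000-disk-0
```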
 
As I mentioned, I have already tried all kinds of formats: LVM, LVM-Thin, ZFS... Whether I use "ZFS-over-iSCSI" or not, it fundamentally does the same thing I did manually: add the iSCSI target, create a ZFS pool, then create a ZFS dataset. Besides, the built-in "ZFS-over-iSCSI" storage only supports a few providers for this automation.

So the issue now is that discard does not work on ZFS or LVM-Thin over iSCSI, and LVM with --saferemove 1 plus --saferemove_throughput is too slow and will almost certainly crash the LUN.
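One thing worth checking first is whether discard even reaches the mapped LUN on the PVE host (device name is an example):

```shell
# non-zero DISC-GRAN / DISC-MAX means the LUN advertises discard support
lsblk --discard /dev/sdX

# the same information from sysfs; 0 means discards are not supported
cat /sys/block/sdX/queue/discard_max_bytes
```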

I suggest you test the same on your PVE; I did nothing unusual to run into this issue. And as evidence, I am not the only one.

----

PS: I have already tried asking ChatGPT, Gemini and DeepSeek for a solution, and none of them could provide one. So I am seeking advice from the wise, awesome people here, and an official solution from Proxmox would be great!
 