On TrueNAS I created a ZVol (Sparse: on) and an iSCSI target (Disable Physical Block Size Reporting: off).
On PVE I mapped the iSCSI target, "formatted" it and used it for VMs; that part works with no problem.
On VM creation, one or more vm-<ID>-disk-<ID> volumes are created.
For example, a 1G disk is created.
The issue: after removing the VM or its disks, the PVE side shows 0 used,
but the TrueNAS side still displays 1G.
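(On the TrueNAS side I compare the usage roughly like this; the dataset path is just an example, not my real one:)
zfs get volsize,used,referenced tank/pve-zvol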
----------
For the "format" step I tried all of them: ZFS, LVM and LVM-Thin.
First, I mapped the iSCSI target.
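Roughly like this (storage name, portal IP and target IQN are placeholders, not my real ones):
pvesm add iscsi truenas-iscsi --portal 192.168.10.5 --target iqn.2005-10.org.freenas.ctl:pve --content none
# --content none: the LUN is not used directly, only as the base device for the storage created below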
Then
ZFS
- not supported from the UI, but still workable via the CLI
- used the CLI to create a ZFS pool on the LUN first
- from the UI I could then add a ZFS storage on this pool (commands sketched below)
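Sketch of what I mean (pool and storage names are placeholders; /dev/sdX stands for the mapped LUN device, better referenced via a /dev/disk/by-id/ path):
zpool create iscsipool /dev/sdX
pvesm add zfspool my-zfs --pool iscsipool    # equivalent of adding it from the UI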
LVM-Thin
- not supported from the UI, but still workable via the CLI
- used the CLI to create PV -> VG -> thin pool first
- from the UI I could then add an LVM-Thin storage on this pool (commands sketched below)
- discard option: on
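Sketch (names are placeholders; /dev/sdX stands for the mapped LUN device):
pvcreate /dev/sdX
vgcreate vg_iscsi /dev/sdX
lvcreate --type thin-pool -l 95%FREE -n thinpool vg_iscsi
pvesm add lvmthin my-lvmthin --vgname vg_iscsi --thinpool thinpool    # equivalent of adding it from the UI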
LVM
- supported from the UI, so I simply created it there
- Wipe Removed Volumes: on
Only LVM with "Wipe Removed Volumes" enabled eventually reflects the freed space on the LUN side, and only after a very long wipe...
Even though
pvesm set <LVM> --saferemove_throughput 1048576000
improves the speed, it is still slow if the LVM storage is large!
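(For reference, the GUI checkbox maps to these storage options; "my-lvm" is a placeholder storage name, and as far as I can tell the throughput value is bytes per second passed to cstream -t:)
pvesm set my-lvm --saferemove 1                        # = "Wipe Removed Volumes"
pvesm set my-lvm --saferemove_throughput 1048576000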
Furthermore, if a VM has more than one disk on this LVM storage,
removing the VM starts more than one wipe task (one task per disk), all wiping the same LUN at the same time, and that eventually crashes the LUN!
Task log excerpt (truncated):
...
33533382656 B 31.2 GB 523.9 s (8:43 min) 64001764 B/s 61.04 MB/s
zero out finished (note: 'No space left on device' is ok here): write: No space left on device
Volume group "pve1vm22x-lp" not found <------------ HERE
TASK ERROR: lvremove 'pve1vm22x-lp/del-vm-22000-cloudinit' error: Cannot process volume group pve1vm22x-lp
HERE: the backing volume group / LUN had already disappeared at this point.
I tested it: removing the VM disks one by one is still fine; VM -> Remove (which wipes all disks at the same time) crashes it 100% of the time!
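To illustrate what I mean by one by one vs. all at once (VMID, storage and disk names are just examples):
pvesm free my-lvm:vm-22000-disk-0     # remove one disk, wait for the wipe task to finish
pvesm free my-lvm:vm-22000-disk-1     # then the next -> no problem
qm destroy 22000                      # whole VM at once -> one wipe task per disk in parallel -> crash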
---
If this is not a bug, then I need a solution!!
If there is a requirement that the backing LUN must allow discard/UNMAP to pass down, I hope Proxmox can document it explicitly; I think the documentation is lacking on this part...
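What I would check (and what I think should be documented); sdX stands for the mapped LUN device on the PVE node, and the lvm.conf hint is only my understanding, not something I have confirmed with Proxmox:
lsblk -D /dev/sdX                              # DISC-GRAN / DISC-MAX of 0 = the LUN does not accept discard
cat /sys/block/sdX/queue/discard_max_bytes     # 0 = no discard/UNMAP support
# for plain LVM, /etc/lvm/lvm.conf also has:  devices { issue_discards = 1 }
# which, as far as I understand, makes lvremove send discards to the PV instead of relying only on zero-wiping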
Having the reported usage differ between the two sides is not acceptable...
Yet this should be a simple and common setup, so why do so few people try to debug it? Does nobody care???