Issue with deleting VMs in Proxmox environment / Use LUNs directly?

2000gtacoma

New Member
Aug 31, 2025
Currently running Proxmox 9.1.6 on a 5-node cluster. The cluster is connected to an iSCSI Dell SAN.
When I delete a VM (e.g. VM 141), I tick both boxes to purge it from job configurations and destroy unreferenced disks owned by the guest. If I then create a new VM, reuse the 141 VM ID, and attempt to boot into the ISO, I still get the previous VM's disk. For example, I originally had a Windows VM running on VMID 141. I no longer needed that VM and deleted it as described above. I then needed to create a Linux VM. When I clicked to create a new VM, VM ID 141 was automatically populated, so I simply rolled with that ID; I had no reason to change it. I created its disk (different size: Windows was 200GB and Linux was 50GB). Upon starting the VM and expecting it to boot to the ISO or stall (boot order), I was given a Windows recovery screen. I've tested this a couple of times and can recreate it.
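
For reference, a rough way to check whether a freshly created disk still carries old data before booting it (the LV names below are just what I'd expect from my datastore0 volume group; adjust to your setup):

lvs datastore0                              # list the logical volumes PVE created in the volume group
file -s /dev/datastore0/vm-141-disk-0       # reports a boot sector/filesystem if old data is still present
dd if=/dev/datastore0/vm-141-disk-0 bs=1M count=1 status=none | hexdump -C | head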

Anyone had this issue? Any ideas?

Not sure if this matters, but it also appears I have a setting enabled that I may not need and that could cause issues: under Datacenter > Storage > DellME5024 (iSCSI), "Use LUNs directly" is ticked.

Should "Use LUNs directly" be ticked? Also, if no VMs are using the LUNs directly, is there any risk in disabling it?

Also, if I run qm rescan I get the following:

root@mypve01:~# qm rescan
rescan volumes...
failed to stat '/dev/datastore0/vm-110-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-105-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-117-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-104-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-123-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-109-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-108-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-116-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-102-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-116-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-102-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-109-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-123-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-117-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-104-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-111-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-105-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-110-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-122-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-136-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-129-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-128-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-103-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-124-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-130-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-137-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-137-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-125-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-124-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-131-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-130-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-136-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-129-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-128-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-122-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-100-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-115-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-200-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-101-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-114-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-119-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-118-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-112-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-133-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-119-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-118-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-114-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-107-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-115-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-100-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-200-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-110-disk-2.qcow2'
failed to stat '/dev/datastore0/vm-101-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-138-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-139-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-132-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-113-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-128-disk-2.qcow2'
failed to stat '/dev/datastore0/vm-127-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-121-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-134-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-120-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-121-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-134-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-120-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-127-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-135-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-113-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-126-disk-0.qcow2'
failed to stat '/dev/datastore0/vm-139-disk-1.qcow2'
failed to stat '/dev/datastore0/vm-138-disk-0.qcow2'
root@mypve01:~#
 
Hi @2000gtacoma ,
You should look into the "saferemove" option for the storage pool (man pvesm // search for saferemove).

I believe when an LVM/qcow volume is removed, the standard behavior is just to delete the metadata; the actual user data is still left on the disk. Obviously, zeroing out the disk, especially a large one, will take a while, and this operation may need a cluster-wide storage lock.
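
As a rough sketch of how to enable it (the storage name below is just a placeholder; use whatever your LVM-on-iSCSI pool is called):

pvesm set Datastore0 --saferemove 1        # adds "saferemove 1" to that pool's entry in /etc/pve/storage.cfg

The same flag can also be set by editing /etc/pve/storage.cfg directly.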


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
What exactly does the saferemove option do? Does it zero the entire disk, or does it look for stale data and remove it?
 
The man page states:
--saferemove <boolean>
Zero-out data when removing LVs.

It zeros out the blocks that were occupied by a particular LV, not the entire physical disk.
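
Conceptually (this is not the literal PVE implementation, and the LV name below is only an example), removal with the option enabled amounts to something like:

dd if=/dev/zero of=/dev/datastore0/vm-141-disk-0 bs=1M status=progress    # overwrite the blocks the LV occupied
lvremove -f datastore0/vm-141-disk-0                                      # then remove the LV itself

That is also why deleting a large disk takes noticeably longer once the option is enabled; if I remember correctly, there is a related throughput option in man pvesm should the wipe need throttling.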


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
You should not (referring to whether "Use LUNs directly" needs to be ticked).


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
I do have multiple backups: 2 PBS servers replicating in 2 different locations, and I also push to AWS S3. Is there any risk in me unticking that box and hitting apply? I realize you are not sitting in front of my system and cannot verify every angle. Apologies for asking so many questions; I'm simply trying to learn, and your time and help are much appreciated.
 
Is there any risk of me unticking that box and hitting apply?
It is hard to say what the risk profile is. You would need to examine what this storage is, what portion of it is used, and in what way.
If nothing is actually using the storage or direct LUNs, then unchecking it will not hurt. A direct LUN means you took an iSCSI LUN and passed it through directly to a VM: no PVE LVM management, no QCOW. You can examine your VM hardware configurations to see if you use any of them in such a way.
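
A quick way to check that across the whole cluster (standard PVE config paths assumed) is to grep the VM configs for the iSCSI storage name:

grep -H "DellME5024" /etc/pve/nodes/*/qemu-server/*.conf    # list any VM disk lines referencing the iSCSI storage directly

No output means nothing is using a direct LUN; references to an LVM pool layered on top of that storage are a different thing and are unaffected.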


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I have not passed a LUN directly through to a VM. All of my disks look like the following (see attached screenshot). Some VMs do have two disks, and obviously the size changes as needed.
 
What are the contents of your /etc/pve/storage.cfg?


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
See below. I sanitized the config some, so just keep that in mind. All disks actually point to the LVM storage Datastore0.



root@"sanitized":~# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content vztmpl,iso,backup

lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir

iscsi: DellME5024
portal x.x.x.x ("sanitized")
target iqn.1988-11.com.dell:01.array.bc305b69756c
content images

lvm: Datastore0
vgname datastore0
base DellME5024:0.0.1.scsi-3600c0ff000fc5bb09ef6be6801000000
content images,rootdir
saferemove 0
shared 1
snapshot-as-volume-chain 1

esxi: esxi06
server "sanitized"
username root
content import
skip-cert-verification 1

pbs: TBPBS
datastore tbpbsbackups
server "sanitized"
content backup
encryption-key "sanitized"
fingerprint "sanitized"
prune-backups keep-all=1
username "sanitized"

pbs: TB-P1
datastore tbpbsbackups
server "sanitized"
content backup
encryption-key "sanitized"
fingerprint "sanitized"
namespace TB-P1
prune-backups keep-all=1
username "sanitized"

pbs: TB-P2
datastore tbpbsbackups
server "sanitized"
content backup
encryption-key "sanitized"
fingerprint "sanitized"
namespace TB-P2
prune-backups keep-all=1
username "sanitized"

pbs: TB-P3
datastore pbsbackups
server "sanitized"
content backup
encryption-key "sanitized"
fingerprint "sanitized"
namespace TB-P3
prune-backups keep-all=1
username "sanitized"

pbs: AWS-S3-Backup
datastore AWS-S3-Backup
server "sanitized"
content backup
encryption-key "sanitized"
fingerprint "sanitized"
prune-backups keep-all=1
username "sanitized"

pbs: TB-AWS-P1
datastore AWS-S3-Backup
server "sanitized"
content backup
encryption-key "sanitized"
fingerprint "sanitized"
namespace TB-AWS-P1
prune-backups keep-all=1
username "sanitized"

pbs: TB-AWS-P2
datastore AWS-S3-Backup
server "sanitized"
content backup
encryption-key "sanitized"
fingerprint "sanitized"
namespace TB-AWS-P2
prune-backups keep-all=1
username "sanitized"

pbs: TB-AWS-P3
datastore AWS-S3-Backup
server "sanitized"
content backup
encryption-key "sanitized"
fingerprint "sanitized"
namespace TB-AWS-P3
prune-backups keep-all=1
username "sanitized"

root@tbpve02:~#
 
So I can set that flag through the GUI by ticking "Wipe Removed Volumes" (see attached screenshots). The "Use LUNs directly" option I was asking about is on the actual iSCSI storage entry.
 
It should be fine to tick one and untick the other.

No warranty, express or implied, regarding the results :-) If you need a more deterministic answer, purchasing a support subscription and opening a case with your hypervisor vendor is the way to go.

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
So I ticked "Wipe Removed Volumes" and unticked "Use LUNs directly" on one of my less critical environments. Unfortunately, I don't have shared storage in my lab/sandbox.

It also appears the "failed to stat" messages are mostly cosmetic and not a real issue. Proxmox looks for the volumes at one path, but the disks actually live at another? This seems to be due to running shared iSCSI with LVM on top, plus some quirk there; I'm not totally sure. I'm not getting any errors in Proxmox or any performance issues. Everything actually runs really well.
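
For anyone hitting the same messages, a rough way to compare what LVM, the device nodes, and PVE each report (names taken from my storage.cfg above):

lvs datastore0                # the logical volumes as LVM sees them
ls -l /dev/datastore0/        # the device nodes qm rescan is trying to stat
pvesm list Datastore0         # the volumes PVE itself lists on that storage

If those all agree on which volumes exist, the stat errors are probably just a path/naming quirk rather than missing disks.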