I have Proxmox 6.4-13.
Our storage for VM disks is LVM on top of iSCSI with multipath.
Our VMs are created by full-cloning a VM template through the Proxmox API.
We then have random problems when deleting a VM through the Proxmox API:
the VM is removed and the LVs of its disks are removed, but the device-mapper devices (listed by dmsetup ls) are still there. So if we want to create a new VM with the same VMID, Proxmox complains while creating the VM because a device with that name already exists.
As a remediation, I wrote a script that compares the list of dm devices (dmsetup ls) with the list of LVs in the storage (lvs) and runs dmsetup remove for every device that no longer has a matching LV; a rough sketch follows below.
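Roughly, the script does something like this (a minimal sketch; the VG name vg_iscsi and the temp file paths are placeholders, not my actual setup):

#!/bin/bash
# Sketch: remove device-mapper entries that no longer have a backing LV.
# Assumption: the shared volume group is named "vg_iscsi" -- adjust to your storage.
VG="vg_iscsi"

# LVs that still exist, translated to device-mapper naming
# (vg-lv, with any dash inside a name doubled).
lvs --noheadings -o vg_name,lv_name "$VG" \
  | awk '{gsub(/-/,"--",$1); gsub(/-/,"--",$2); print $1 "-" $2}' \
  | sort > /tmp/existing_lvs.txt

# dm devices that belong to this VG.
dmsetup ls \
  | awk '{print $1}' \
  | grep "^${VG//-/--}-" \
  | sort > /tmp/dm_devices.txt

# Devices present in dm but with no backing LV are stale: print them first,
# then remove once the list has been verified.
comm -13 /tmp/existing_lvs.txt /tmp/dm_devices.txt | while read -r dev; do
    echo "stale dm device: $dev"
    # dmsetup remove "$dev"    # uncomment after checking the output
done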
Has anyone seen this behaviour when cloning and removing lots of VMs on LVM storage?