VM disk deletes, but when a new VM is created, it comes back

Hanrick

New Member
Apr 23, 2024
Heyy!

I have been searching for two days, but could not find a solution to this issue, and I haven't even seen anyone else report the same problem.
Here is the situation:
I installed Windows Server 2025 in a VM (vm102), and after removing the VM with purge, it seemed to be gone. But when I created a new VM, the disk came back from the dead and booted Windows like it was never deleted. I even restarted the server, but that did not solve the issue. The storage is an LVM volume group made of 4 SSDs behind a hardware RAID controller (RAID 5). I also could not find the mount point or mount it (I don't really understand LVM yet), but from what I can see in the lvs output, the disk does seem to get deleted when I delete the VM.

I am completely lost at this point. Do you guys have any clue what happened and how I can solve this issue?
Thanks for the help! <3

lvs output before and after removing the VM.
Bash:
root@bukk:/dev# lvs Data
  LV                  VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-100-disk-0.qcow2 Data -wi-ao----   8.00g                                                  
  vm-100-disk-1.qcow2 Data -wi-ao---- 256.04g                                                  
  vm-101-disk-0.qcow2 Data -wi-------   4.00m                                                  
  vm-101-disk-1.qcow2 Data -wi------- 128.02g                                                  
  vm-101-disk-2       Data -wi-------   4.00m                                                  
  vm-102-disk-0.qcow2 Data -wi-ao----   4.00m                                                  
  vm-102-disk-1.qcow2 Data -wi-ao---- 128.02g                                                  
  vm-102-disk-2       Data -wi-ao----   4.00m                                                  
root@bukk:/dev# lvs Data
  LV                  VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-100-disk-0.qcow2 Data -wi-ao----   8.00g                                                  
  vm-100-disk-1.qcow2 Data -wi-ao---- 256.04g                                                  
  vm-101-disk-0.qcow2 Data -wi-------   4.00m                                                  
  vm-101-disk-1.qcow2 Data -wi------- 128.02g                                                  
  vm-101-disk-2       Data -wi-------   4.00m

lsblk output. (My VMs are stored on sda.)
Bash:
root@bukk:/dev# lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                             8:0    0   2.6T  0 disk
├─Data-vm--100--disk--0.qcow2 252:2    0     8G  0 lvm
└─Data-vm--100--disk--1.qcow2 252:3    0   256G  0 lvm
sdb                             8:16   0   2.7T  0 disk
└─OVDAT-vm--100--disk--0      252:4    0   2.7T  0 lvm
sdc                             8:32   0 111.8G  0 disk
├─sdc1                          8:33   0  1007K  0 part
├─sdc2                          8:34   0     1G  0 part /boot/efi
└─sdc3                          8:35   0 110.8G  0 part
  ├─pve-swap                  252:0    0     8G  0 lvm  [SWAP]
  └─pve-root                  252:1    0 102.8G  0 lvm  /
 
Have you checked this option:
Code:
LVM BACKEND
       Storage pool type: lvm

       LVM is a light software layer on top of hard disks and partitions. It can be used to split available disk space into smaller logical volumes. LVM is widely used on Linux and makes managing hard drives easier.

       Another use case is to put LVM on top of a big iSCSI LUN. That way you can easily manage space on that iSCSI LUN, which would not be possible otherwise, because the iSCSI specification does not define a management
       interface for space allocation.

   Configuration
       The LVM backend supports the common storage properties content, nodes, disable, and the following LVM specific properties:

       saferemove  <<<<<<<<<<<<<<<<<<<<<<<<<
           Called "Wipe Removed Volumes" in the web UI. Zero-out data when removing LVs. When removing a volume, this makes sure that all data gets erased and cannot be accessed by other LVs created later (which happen to be
           assigned the same physical extents). This is a costly operation, but may be required as a security measure in certain environments.

       saferemove_throughput <<<<<<<<<<<<<<<<<<<<<<
           Wipe throughput (cstream -t parameter value).
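Without saferemove, lvremove only drops the LVM metadata for the volume; the data in the freed physical extents is left untouched. A new LV that happens to be allocated from those same extents will therefore still contain the old disk contents, which is exactly what you are seeing with vm102. A minimal sketch of enabling it, assuming your Proxmox storage ID is Data like the VG in your lvs output (check /etc/pve/storage.cfg for the real ID):

Bash:
# Turn on "Wipe Removed Volumes" for the LVM storage. The storage ID
# ("Data") is an assumption here; use the one from /etc/pve/storage.cfg.
pvesm set Data --saferemove 1

# Optional: limit the wipe speed (the value is passed to cstream -t,
# in bytes per second) so a large delete doesn't saturate the array.
pvesm set Data --saferemove_throughput 10485760

The same setting is exposed in the web UI as "Wipe Removed Volumes" under Datacenter -> Storage -> Edit.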


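Note that saferemove only affects volumes removed from now on; the extents freed by the old vm-102 disks still hold the Windows data. One way to scrub them right away is to claim all currently free extents with a throwaway LV, zero it, and remove it again. This is a generic LVM sketch, not a Proxmox-specific command, and the wipe-free name is made up:

Bash:
# Create a temporary LV covering all currently free extents in the VG.
lvcreate -l 100%FREE -n wipe-free Data
# Overwrite it with zeros; this erases the leftover vm-102 data.
dd if=/dev/zero of=/dev/Data/wipe-free bs=1M status=progress
# Remove the temporary LV, returning the (now clean) extents to the pool.
lvremove -y Data/wipe-free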

Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox