Duplication of a virtual disk on LVM thin

Hello. I have a problem. There is a virtual machine with two disks, one 60 GB and one 100 GB. The disks are stored on local LVM thin storage. The VM configuration file references one 60 GB disk, but on the storage there are two 60 GB disks associated with this machine. What kind of problem is this, how can I delete the ghost disk, and is it even possible to delete it?

(Screenshots attached: proxmox virtual machine config.png, proxmox lvmthin ghost.png)
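A quick way to cross-check which volumes on the storage carry the VMID against what the config actually references (a minimal sketch, assuming the thin pool lives in a VG called raid and the VMID is 111, as in the output posted below):
Code:
# volumes on the thin pool whose name contains the VMID
lvs --noheadings -o lv_name raid | grep 'vm-111-'
# disks actually referenced in the VM config
grep 'vm-111-disk' /etc/pve/qemu-server/111.conf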
 
Posted.

Code:
root@node01:~# lvs
  LV             VG   Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root           pve  -wi-ao----  108,32g                                                  
  swap           pve  -wi-ao----    2,00g                                                  
  raid           raid twi-aotz--   <5,26t             8,75   1,81                          
  vm-111-disk-0  raid Vwi-aotz--  100,00g raid        100,00                               
  vm-111-disk-1  raid Vwi-a-tz--   60,00g raid        33,58                                
  vm-111-disk-2  raid Vwi-aotz--   60,00g raid        100,00                               
  vm-1119-disk-0 raid Vwi-aotz--  150,00g raid        67,14                                
  vm-5005-disk-0 raid Vwi-aotz--  100,00g raid        45,91                                
  vm-5005-disk-1 raid Vwi-aotz--  250,00g raid        57,74                                
  ssd            ssd  twi-aotz-- <458,73g             54,49  2,79                          
  vm-5005-disk-0 ssd  Vwi-aotz--  250,00g ssd         99,97

Config

Code:
root@node01:~# cat /etc/pve/qemu-server/111.conf
agent: 1
boot: cdn
bootdisk: virtio0
cores: 4
cpu: host
machine: q35
memory: 16384
name: win
net0: virtio=52:F0:AE:AB:78:99,bridge=vmbr1,tag=2102
numa: 0
ostype: win10
sata0: none,media=cdrom
scsihw: virtio-scsi-pci
smbios1: uuid=125a3e25-5ec2-4a2e-9938-3cfb85f761b4
sockets: 1
virtio0: local-data-raid:vm-111-disk-0,cache=writeback,format=raw,size=100G
virtio1: local-data-raid:vm-111-disk-2,format=raw,size=60G
vmgenid: 838393ba-34cf-4de4-a9d6-e3831d23b4f8
 
It's odd that the disk ending with disk-1 is not shown as unused in the GUI; normally it should be.
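In the lvs output above, the sixth character of the Attr column shows whether the device is currently open: vm-111-disk-0 and vm-111-disk-2 are marked 'o' (attached to the running VM), while vm-111-disk-1 is active but not open. A minimal way to check that (sketch):
Code:
# the 6th lv_attr character is 'o' while the device is open/in use
lvs -o lv_name,lv_attr raid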

If you really know what you are doing and want to get rid of the volume:

add a new entry to the config file:
Code:
unused0: local-data-raid:vm-111-disk-1
then it should be visible and can be deleted via the GUI

or

Manually
Code:
lvremove raid/vm-111-disk-1
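As an alternative to editing the config file by hand, a storage rescan for the VM should also detect the orphaned volume and register it as an unused disk (a sketch; the exact subcommand name may differ slightly between PVE versions):
Code:
qm rescan --vmid 111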
 
If I add the unused0 parameter to the config, I get an error

Code:
Parameter verification failed. (400)

unused0: unable to hotplug unused0: lvremove 'raid/vm-111-disk-1' error: Logical volume raid/vm-111-disk-1 contains a filesystem in use.

If I try to delete it manually from the console, I get the same error

Code:
root@node01:~# lvremove -f /dev/raid/vm-111-disk-1
  Logical volume raid/vm-111-disk-1 contains a filesystem in use.
 
Please check if the filesystem is mounted with df -PHT.
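If nothing relevant shows up in df, it may also be worth checking whether something else still holds the device open (a sketch; the device-mapper name is the VG/LV name with dashes doubled):
Code:
# open count of the device-mapper device backing the LV
dmsetup info raid-vm--111--disk--1
# processes that currently have the block device open
fuser -v /dev/raid/vm-111-disk-1
# nested mappings (e.g. partition mappings) that may keep it busy
lsblk /dev/raid/vm-111-disk-1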
 
OK. I actually have two ghost disks on two guests (one is a QEMU VM, the other an LXC container). On the node running the QEMU VM I get this result:

Code:
root@node01:~# df -PHT
Filesystem                     Type       Size         Used  Avail          Use% Mounted on
udev                           devtmpfs    51G            0   51G            0% /dev
tmpfs                          tmpfs       11G          35M   11G            1% /run
/dev/mapper/pve-root           ext4       115G         2,8G  107G            3% /
tmpfs                          tmpfs       51G          73M   51G            1% /dev/shm
tmpfs                          tmpfs      5,3M            0  5,3M            0% /run/lock
tmpfs                          tmpfs       51G            0   51G            0% /sys/fs/cgroup
/dev/sda1                      ext4       2,2T          84M  2,1T            1% /mnt/pve/local-backup
/dev/fuse                      fuse        32M          58k   32M            1% /etc/pve
10.10.10.1:/tank/proxmoxbackup nfs        5,8T         4,3T  1,6T           73% /mnt/pve/nas01nfs
tmpfs                          tmpfs       11G            0   11G            0% /run/user/0

On the node running the LXC container I get this result:

Code:
root@node02:~# df -PHT
Filesystem                         Type       Size         Used  Avail          Use% Mounted on
udev                               devtmpfs    51G            0   51G            0% /dev
tmpfs                              tmpfs       11G          27M   11G            1% /run
/dev/mapper/pve-root               ext4       115G          26G   84G           24% /
tmpfs                              tmpfs       51G          66M   51G            1% /dev/shm
tmpfs                              tmpfs      5,3M            0  5,3M            0% /run/lock
tmpfs                              tmpfs       51G            0   51G            0% /sys/fs/cgroup
/dev/sda1                          ext4       2,2T          84M  2,1T            1% /mnt/pve/local-backup
/dev/fuse                          fuse        32M          58k   32M            1% /etc/pve
10.10.10.1:/tank/proxmoxbackup     nfs        5,8T         4,3T  1,6T           73% /mnt/pve/nas01nfs
/dev/mapper/raid-vm--1118--disk--0 ext4       1,2T         8,9G  1,1T            1% /var/lib/lxc/1118/rootfs
tmpfs                              tmpfs       11G            0   11G            0% /run/user/0

The LXC problem is clear: the ghost disk really is mounted at /var/lib/lxc/1118/rootfs, but the QEMU VM's ghost disk is not mounted anywhere.

Additional data on the LXC problem:

Code:
root@node02:~# lvs
  LV             VG   Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root           pve  -wi-ao----  108,32g                                                   
  swap           pve  -wi-ao----    2,00g                                                   
  raid           raid twi-aotz--   <5,26t             22,45  4,27                           
  vm-1000-disk-0 raid Vwi-aotz--   50,00g raid        15,53                                 
  vm-1000-disk-1 raid Vwi-aotz--  120,00g raid        0,01                                   
  vm-1002-disk-0 raid Vwi-aotz--   25,00g raid        5,95                                   
  vm-1006-disk-0 raid Vwi-aotz--   70,00g raid        100,00                                 
  vm-1006-disk-1 raid Vwi-aotz--  350,00g raid        100,00                                 
  vm-1007-disk-0 raid Vwi-aotz--   20,00g raid        100,00                                 
  vm-1008-disk-0 raid Vwi-aotz--   50,00g raid        100,00                                 
  vm-101-disk-0  raid Vwi-aotz--  150,00g raid        100,00                                 
  vm-102-disk-0  raid Vwi-aotz--   51,00g raid        100,00                                 
  vm-103-disk-0  raid Vwi-aotz--   10,00g raid        100,00                                 
  vm-104-disk-0  raid Vwi-aotz--   10,00g raid        100,00                                 
  vm-106-disk-0  raid Vwi-aotz--   25,00g raid        100,00                                 
  vm-107-disk-0  raid Vwi-aotz--   50,00g raid        100,00                                 
  vm-108-disk-0  raid Vwi-aotz--   50,00g raid        100,00                                 
  vm-109-disk-0  raid Vwi-aotz--  120,00g raid        100,00                                 
  vm-110-disk-0  raid Vwi-aotz--   25,00g raid        100,00                                 
  vm-1118-disk-0 raid Vwi-aotz--    1,07t raid        2,40                                   
  vm-1118-disk-1 raid Vwi-aotz--   10,00g raid        47,45                                
  vm-112-disk-0  raid Vwi-aotz--   12,00g raid        100,00                                 
  vm-113-disk-0  raid Vwi-aotz--   50,00g raid        100,00                                 
  vm-5000-disk-0 raid Vwi-aotz--  100,00g raid        100,00                                 
  vm-5002-disk-0 raid Vwi-a-tz--   60,00g raid        34,14                                 
  vm-5003-disk-0 raid Vwi-a-tz--  100,00g raid        4,25                                   
  ssd            ssd  twi-aotz-- <458,73g             0,00   0,35

LXC config

Code:
root@node02:~# cat /etc/pve/lxc/1118.conf
arch: i386
cores: 2
hostname: vm.domain.local
memory: 512
nameserver: 8.8.8.8
net0: name=eth0,bridge=vmbr1,gw=XXX.XXX.XXX.XXX,hwaddr=72:BE:8B:41:0A:A0,ip=YYY.YYY.YYY.YYY/27,tag=543,type=veth
net1: name=eth1,bridge=vmbr1,hwaddr=3A:30:38:F1:29:05,ip=192.168.YY.YY/24,tag=304,type=veth
ostype: gentoo
rootfs: local-data-raid:vm-1118-disk-1,size=10G
searchdomain: domain.local
swap: 512
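For the container, the equivalent rescan may register the orphaned volume as unused as well (a sketch, assuming this pct version supports the --vmid filter; note that a volume cannot be removed while it is still mounted):
Code:
pct rescan --vmid 1118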
 
Result for VM ID 111:

Code:
root@node01:~# ps auxf | grep 111
root     24117  0.0  0.0   6092   888 pts/0    S+   10:26   0:00  |                   \_ grep 111
root     17353 26.6 16.9 17812536 16804696 ?   Sl   мар20 957:18 /usr/bin/kvm -id 111 -name win -chardev socket,id=qmp,path=/var/run/qemu-server/111.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control -pidfile /var/run/qemu-server/111.pid -daemonize -smbios type=1,uuid=125a3e25-5ec2-4a2e-9938-3cfb85f761b4 -smp 4,sockets=1,cores=4,maxcpus=4 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vnc unix:/var/run/qemu-server/111.vnc,password -no-hpet -cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,hv_ipi -m 16384 -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg -device vmgenid,guid=838393ba-34cf-4de4-a9d6-e3831d23b4f8 -device usb-tablet,id=tablet,bus=ehci.0,port=1 -device VGA,id=vga,bus=pcie.0,addr=0x1 -chardev socket,path=/var/run/qemu-server/111.qga,server,nowait,id=qga0 -device virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:fa70965f4aeb -drive file=/dev/raid/vm-111-disk-0,if=none,id=drive-virtio0,format=raw,cache=writeback,aio=threads,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -drive file=/dev/raid/vm-111-disk-2,if=none,id=drive-virtio1,format=raw,cache=none,aio=native,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb -device ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7 -drive if=none,id=drive-sata0,media=cdrom,aio=threads -device ide-cd,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=200 -netdev type=tap,id=net0,ifname=tap111i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=52:F0:AE:AB:78:99,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 -rtc driftfix=slew,base=localtime -machine type=pc-q35-4.1+pve0 -global kvm-pit.lost_tick_policy=discard -incoming unix:/run/qemu-server/111.migrate -S
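One way to confirm which volumes the running kvm process actually holds open is to look at its open file descriptors (a sketch using the PID from the output above; the LVs show up under their kernel dm-N names):
Code:
# block devices held open by the kvm process (PID 17353)
ls -l /proc/17353/fd | grep dm-
# map the dm-N kernel names back to LV names
lsblk -o NAME,KNAME,TYPE | grep dm-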

Result for container ID 1118:

Code:
root@node02:~# ps auxf | grep 1118
root     18129  0.0  0.0   6092   828 pts/0    S+   10:29   0:00              \_ grep 1118
root      6022  0.0  0.0   3772  2956 ?        Ss   мар22   0:06 [lxc monitor] /var/lib/lxc 1118
root      6665  0.0  0.0   2400  1556 ?        Ss   мар22   0:00 /usr/bin/dtach -A /var/run/dtach/vzctlconsole1118 -r winch -z lxc-console -n 1118 -e -1
root      6666  0.0  0.0   3688  2696 pts/4    Ss+  мар22   0:00  \_ lxc-console -n 1118 -e -1
 
I managed to remove the unnecessary disks, but it was only possible after a physical reboot of the hosts. I don't like this solution.
 

Yes, that always helps, but it is indeed not a viable option. I have also occasionally experienced this on some servers with snapshot removal/rollback.
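For reference, when this happens again, a sequence like the following sometimes avoids the reboot, provided nothing is actually using the volume any more (a sketch, not guaranteed to work in every case):
Code:
# try to deactivate the logical volume
lvchange -an raid/vm-111-disk-1
# if that fails because the mapping is still held, drop the stale device-mapper entry
dmsetup remove raid-vm--111--disk--1
# then remove the logical volume
lvremove raid/vm-111-disk-1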
 
