Can't delete vm-disk from ZFS storage


New Member
Jul 11, 2015
I was trying to test restoring a VM from a backup.
I got some errors during the restore (my question is not about those errors), and afterwards my ZFS VM storage was filled up with disks like these in the VM's Storage/Content tab:

The size of each disk is 100 GB, which equals the size of the VM.
The "Remove" button is greyed out.

My question is:
How can I remove these disk images and free the disk space?

The lvremove command does not work because this storage is on ZFS, and vgdisplay shows only the VG pve, on which Proxmox itself is installed.
root@vmpve:/home/fes# vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 68.21 GiB
PE Size 4.00 MiB
Total PE 17461
Alloc PE / Size 15286 / 59.71 GiB
Free PE / Size 2175 / 8.50 GiB
VG UUID PbNG4Z-FfsZ-wfx5-G71X-08Uc-YpVt-oz1dYf
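Since Proxmox stores VM disks on a ZFS storage as zvols rather than LVM logical volumes, the `zfs` command is the ZFS counterpart of `lvremove`. A sketch, assuming the pool and disk names taken from the command output later in this post (verify the exact dataset name with the list command before destroying anything):

```shell
# List all zvols (raw VM disk images) on the pool
zfs list -t volume -r sata_raid

# Destroy a leftover disk image (irreversible - double-check the name first)
zfs destroy sata_raid/vm-101-disk-2
```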

There are more strange things:

root@vmpve:/home/fes# df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 3.2G 416K 3.2G 1% /run
/dev/mapper/pve-root 17G 1.1G 15G 7% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 6.3G 25M 6.3G 1% /run/shm
/dev/mapper/pve-data 33G 11G 23G 32% /var/lib/vz
/dev/fuse 30M 16K 30M 1% /etc/pve
sata_raid 26G 128K 26G 1% /sata_raid
zfs146gb 134G 53G 82G 40% /zfs146gb

df -h shows only 26G for "sata_raid", which is a ZFS RAID-1 mirror with 600 GB of capacity.

root@vmpve:/home/fes# zpool list
NAME        SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
sata_raid   596G   147G   449G         -    16%    24%  1.00x  ONLINE  -
zfs146gb    136G  52.8G  83.2G         -    26%    38%  1.00x  ONLINE  -

zpool list shows 449G free on "sata_raid" pool.
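This discrepancy is expected: df only sees the pool's mounted root filesystem, while space consumed by zvols (the VM disk images) is accounted at the pool level, so `zpool list` and `zfs list` are the right tools for ZFS capacity questions. A sketch using the pool name from the output above:

```shell
# Pool-level view: total size, allocated and free space
zpool list sata_raid

# Dataset-level view, including the zvols that df never shows
zfs list -o name,used,avail,refer -r -t all sata_raid
```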

I found this post,
but that solution does not apply to me because I am using ZFS.
Here are the answers to some configuration questions:

root@vmpve:/home/fes# pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 pve lvm2 a-- 68.21g 8.50g
root@vmpve:/home/fes# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 3 0 wz--n- 68.21g 8.50g
root@vmpve:/home/fes# lvs
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
data pve -wi-ao--- 32.71g
root pve -wi-ao--- 17.00g
swap pve -wi-ao--- 10.00g
root@vmpve:/home/fes# dmsetup ls --tree
pve-swap (253:1)
`- (8:3)
pve-root (253:0)
`- (8:3)
pve-data (253:2)
`- (8:3)
root@vmpve:/home/fes# cat /etc/pve/qemu-server/101.conf
memory: 128

root@vmpve:/home/fes# ls -lsa /dev/mapper/
total 0
0 drwxr-xr-x 2 root root 120 Jul 22 05:35 .
0 drwxr-xr-x 18 root root 6860 Jul 22 05:35 ..
0 crw------- 1 root root 10, 61 Jul 22 05:35 control
0 lrwxrwxrwx 1 root root 7 Jul 22 05:35 pve-data -> ../dm-2
0 lrwxrwxrwx 1 root root 7 Jul 22 05:35 pve-root -> ../dm-0
0 lrwxrwxrwx 1 root root 7 Jul 22 05:35 pve-swap -> ../dm-1

root@vmpve:/home/fes# pvesm list zfs
zfs:base-100-disk-1 raw 107374182400 100
zfs:vm-101-disk-1 raw 107374182400 101
zfs:vm-101-disk-2 raw 107374182400 101
zfs:vm-101-disk-3 raw 107374182400 101
zfs:vm-102-disk-1 raw 107374182400 102
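A volume that is truly orphaned can also be removed directly through the Proxmox storage layer with `pvesm free`, bypassing the VM hardware tab entirely. The volume ID below is taken from the listing above as an example; make sure the disk is really unused before freeing it:

```shell
# Delete a volume through the Proxmox storage abstraction
pvesm free zfs:vm-101-disk-2
```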

Thank you.


New Member
Jul 11, 2015
I found the solution!
I ran the
qm rescan
command, and the unused VM disks appeared in the VM's hardware list, so I could delete them using the "Remove" button.
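For reference, the rescan can cover all VMs or be limited to a single one; afterwards, orphaned images show up as "unusedX" entries in the VM's configuration, where the Remove button (or the CLI) can delete them. VM ID 101 is the one from the listing above:

```shell
# Scan all storages and re-attach orphaned disk images to their VM configs
qm rescan

# Or restrict the scan to a single VM
qm rescan --vmid 101
```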




The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!

Get your subscription!

The Proxmox team works very hard to make sure you are running the best software and getting stable updates and security enhancements, as well as quick enterprise support. Tens of thousands of happy customers have a Proxmox subscription. Get your own in 60 seconds.

Buy now!