Orphaned fleece drives filled up my drive - now it's too full to boot.

sammeeeeeee

New Member
May 11, 2024
Hiya!

I had to force-stop one of my backups, which I believe resulted in orphaned fleece drives. This is what it looked like:
[screenshot of the orphaned vm-*-fleece-* disks]

I couldn't delete them (the error said to remove them from the Hardware tab of each VM, but they weren't listed there), so I thought, as with all things, a restart might help.

Unfortunately, after the reboot I could not get into the web GUI, and the VMs didn't start.

Here is the output of `zfs list`:
```
root@pxmx01:~# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
Backup                        1.23M   430G    96K  /Backup
rpool                          430G   750M   104K  /rpool
rpool/ROOT                     330G   750M    96K  /rpool/ROOT
rpool/ROOT/pve-1               330G   750M   330G  /
rpool/data                    76.9G   750M   104K  /rpool/data
rpool/data/subvol-107-disk-0  1.07G   750M  1.07G  /rpool/data/subvol-107-disk-0
rpool/data/vm-101-disk-0      1.53G   750M  1.53G  -
rpool/data/vm-102-disk-0      34.6G   750M  34.6G  -
rpool/data/vm-102-fleece-0    55.2M   750M  55.2M  -
rpool/data/vm-103-disk-0        84K   750M    84K  -
rpool/data/vm-103-disk-1      15.7G   750M  15.7G  -
rpool/data/vm-103-disk-2        64K   750M    64K  -
rpool/data/vm-103-fleece-0      56K   750M    56K  -
rpool/data/vm-104-disk-0      19.0G   750M  19.0G  -
rpool/data/vm-104-fleece-0      56K   750M    56K  -
rpool/data/vm-105-disk-0      4.98G   750M  4.34G  -
rpool/data/vm-105-fleece-0      56K   750M    56K  -
rpool/data/vm-106-disk-0        56K   750M    56K  -
rpool/var-lib-vz              22.7G   750M  22.7G  /var/lib/vz
```

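For completeness, pool-level usage can be broken down further like this (standard ZFS commands, nothing Proxmox-specific; `rpool` is the pool shown above):

```
# Overall pool capacity, allocation, and health
zpool list rpool

# Break USED down per dataset: snapshots (USEDSNAP), the dataset
# itself (USEDDS), and child datasets (USEDCHILD)
zfs list -r -o space rpool
```
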
Am I safe to run `sudo rm -r rpool/data/vm-105-fleece-0` (and the same for the other fleece disks)? Or is there something else I should do to get rid of them?

I disabled the backup job before I restarted, so I don't believe the fleece disks will be recreated on restart.

Thanks a mil,
Sam
 
Hi,

I think it should be safe to remove the fleece volumes.
There is no need for sudo, though, since you are already root in that shell.
Also, you are using ZFS to store the VM disks, which means the disks are ZVOLs (block devices, not regular files), so you cannot remove them with rm.
You need to use `zfs destroy rpool/data/vm-105-fleece-0` to remove them.
I once had a cancelled migration that left behind two disks that were no longer used, and the only way to remove them was zfs destroy.
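If you want to get rid of all of them at once, something like the following should work. This is only a sketch: run the list commands first and make sure only the orphaned fleece volumes show up before destroying anything.

```
# Confirm the fleece datasets really are ZVOLs (type "volume")
zfs get -r -o name,value type rpool/data | grep fleece

# Dry run: list every dataset whose name contains "-fleece-"
zfs list -H -o name | grep -- '-fleece-'

# If the list contains only the orphaned volumes, destroy them
for vol in $(zfs list -H -o name | grep -- '-fleece-'); do
    zfs destroy "$vol"
done
```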

Regards,
KH
 
