Orphaned Fleecing Files

tcabernoch

Apr 27, 2024
I've no idea how this happened.
The VM that owns these files is on another host now. Used to be on this one.

These files are probably sparse, and don't really contain that much wasted space, but this just seems bad in general.

Does anybody else have junk like this lying around? How is it happening? What's a good method for cleaning it up?


I had some of them because my PVE rebooted during a backup due to bad memory. An interrupted/killed backup, maybe?
As for cleaning them up, in your case you can simply select them one by one and use the Remove button. It should work, as the VM is no longer on the node.
Alternatively, use the shell and run zfs destroy with the full dataset name as shown by zfs list (not a filesystem path), e.g. zfs destroy pool/dataset/vm-223-fleece-0
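If you're nervous about destroying the wrong thing, zfs destroy supports a dry run: -n does a no-op deletion and -v prints what would be removed. A cautious sketch (pool/dataset is a placeholder for whatever your zfs list shows):

zfs destroy -nv pool/dataset/vm-223-fleece-0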
 
Ok. We can test that. I'll interrupt some fleeced backups, and see what happens.
In even the most stable systems, backup jobs do occasionally get killed to meet the demands of the moment.
If killing a job is going to leave a mess to clean up, then that's something to be aware of and plan for.
I'll test and report back.
 
A fleecing ZFS dataset got stuck after a backup job error and can't be deleted. WTF, now I have to stop the VM and restart it. Please fix the bug in fleecing; for the moment I've disabled it in my backup jobs.
 
Yeah, it's weird because the GUI won't let you delete a file "belonging" to a VM present on that host. Apparently it figures out ownership by file name. Really clunky.

Here's how to fix it. You've got to do this from the CLI.

First, "zfs list" and look for the names of the fleecing files.

zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       1010G  2.01T   205K  /rpool
rpool/ROOT                  4.79G  2.01T   205K  /rpool/ROOT
rpool/ROOT/pve-1            4.79G  2.01T  4.79G  /
rpool/data                  1000G  2.01T   205K  /rpool/data
rpool/data/vm-203-disk-0     496G  2.01T   496G  -
rpool/data/vm-203-disk-1    18.1G  2.01T  18.1G  -
rpool/data/vm-212-disk-0    12.2G  2.01T  12.2G  -
rpool/data/vm-212-fleece-0   119K  2.01T   119K  -
rpool/data/vm-213-disk-0    78.3G  2.01T  78.3G  -
rpool/data/vm-213-fleece-0   119K  2.01T   119K  -
rpool/data/vm-217-disk-0     239K  2.01T   239K  -
rpool/data/vm-217-disk-1    22.6G  2.01T  22.6G  -
rpool/data/vm-221-disk-0     239G  2.01T   239G  -
rpool/data/vm-233-disk-0    67.5G  2.01T  67.5G  -
rpool/data/vm-233-disk-1    11.2G  2.01T  11.2G  -
rpool/data/vm-233-disk-2     724M  2.01T   724M  -
rpool/data/vm-233-disk-3    81.4M  2.01T  81.4M  -
rpool/data/vm-233-fleece-0   119K  2.01T   119K  -
rpool/data/vm-233-fleece-1   119K  2.01T   119K  -
rpool/data/vm-233-fleece-2   119K  2.01T   119K  -
rpool/data/vm-233-fleece-3   119K  2.01T   119K  -
rpool/data/vm-234-disk-0    55.5G  2.01T  55.5G  -
rpool/data/vm-234-fleece-0   119K  2.01T   119K  -
rpool/var-lib-vz            4.16G  2.01T  4.16G  /var/lib/vz
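If the list is long, grep makes it easy to pick out just the fleecing volumes. This is ordinary zfs/grep usage, assuming the standard vm-<ID>-fleece-<N> naming; -H drops the header row so the names are script-friendly, and -- keeps grep from treating the pattern as an option:

zfs list -H -o name -t volume | grep -- '-fleece-'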


Delete them with zfs destroy and the full dataset name.

[<snip>: ~]# zfs destroy rpool/data/vm-212-fleece-0
[<snip>: ~]# zfs destroy rpool/data/vm-213-fleece-0
[<snip>: ~]# zfs destroy rpool/data/vm-233-fleece-0
[<snip>: ~]# zfs destroy rpool/data/vm-234-fleece-0


Some of them you can't delete.
If you get the "dataset busy" error, you have to shut down the VM first, and then delete them.

[<snip>: ~]# zfs destroy rpool/data/vm-233-fleece-1
cannot destroy 'rpool/data/vm-233-fleece-1': dataset is busy
[<snip>: ~]# zfs destroy rpool/data/vm-233-fleece-2
cannot destroy 'rpool/data/vm-233-fleece-2': dataset is busy
[<snip>: ~]# zfs destroy rpool/data/vm-233-fleece-3
cannot destroy 'rpool/data/vm-233-fleece-3': dataset is busy
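In practice that looks something like this with qm, using VM 233 from the list above as the example (qm shutdown asks the guest for a clean shutdown before you destroy the volume and start it back up):

[<snip>: ~]# qm shutdown 233
[<snip>: ~]# zfs destroy rpool/data/vm-233-fleece-1
[<snip>: ~]# qm start 233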
 
