Orphan Fleecing Files Make Backups Fail

tcabernoch

This is a known issue. Fiona, one of the devs, confirmed as much.
If you run a PBS backup that uses 'fleecing', and the backup fails, it will leave behind a garbage cache file.

What I didn't realize was that not only does it make a mess, it also prevents subsequent backups from running.
The next backup gets this far, sees a file name conflict, and just dies:
ERROR: zfs error: cannot create 'rpool/data/vm-310-fleece-1': dataset already exists

So keep that in mind. To the best of my understanding of this fancy new 'fleecing' feature
... if fleecing screws up, you need to clean it up manually, and backups are broken for that VM until you do.
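A quick way to check whether you're in this state: the leftover fleecing image is just an ordinary volume/dataset on whatever storage fleecing was pointed at. Storage and pool names below are from my setup; adjust to yours.

Code:
# Look for orphaned fleecing images on the fleecing storage (local-zfs here)
pvesm list local-zfs | grep fleece

# Same thing at the ZFS level
zfs list | grep fleece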

Code:
INFO: Starting Backup of VM 310 (qemu)
INFO: Backup started at 2024-07-11 02:48:57
INFO: status = running
INFO: VM Name: phx-log-01
INFO: include disk 'virtio0' 'local-zfs:vm-310-disk-0' 32G
INFO: include disk 'virtio1' 'local-zfs:vm-310-disk-1' 100G
INFO: include disk 'virtio2' 'local-zfs:vm-310-disk-2' 8G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/mnt/pve/PROXMOX_BACKUPS-PHX-NAS-03/dump/vzdump-qemu-310-2024_07_11-02_48_57.vma.zst'
ERROR: zfs error: cannot create 'rpool/data/vm-310-fleece-1': dataset already exists
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 310 failed - zfs error: cannot create 'rpool/data/vm-310-fleece-1': dataset already exists
INFO: Failed at 2024-07-11 02:48:59
 
I had to bounce that VM in order to delete the fleecing file. The file was still locked and active.
I don't know if the backup would have still failed if the fleecing file was no longer locked. Would it overwrite the file and proceed with the backup?
Dunno.
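For anyone else cleaning this up: this is roughly what worked for me. Names are taken from the error above; adjust to your VM and pool, and only do it once the VM no longer has the image attached (in my case, after bouncing the VM).

Code:
# Remove the orphaned fleecing image through the storage layer ...
pvesm free local-zfs:vm-310-fleece-1

# ... or destroy the dataset directly with ZFS
zfs destroy rpool/data/vm-310-fleece-1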
 
Hi,
currently, if there is a hard failure where the backup task cannot clean up after itself (it does in a normal error scenario), you'll need to remove the fleecing files manually. Can you please share the full task log of the previous failed backup (the one that led to the leftover fleecing image), so we can see what exactly went wrong there?
 
Thanks for your followup.

It appears that in this case the target is NOT a PBS server. The backup job has fleecing enabled, and I see fleecing in the log, but this is a PVE backup going directly to NFS.

I'm rebuilding the PBS server at this site. Our temporary solution was old-style PVE backups direct to NFS storage ... which then filled up.
It looks like the backup failed because the NAS filled up, and that left the fleecing file (which is on local-zfs!) locked.
Even though that file sits on storage that wasn't affected, the VM needs a power cycle before the lock will clear; you can't nuke the fleecing dataset until you do. And that is what caused the subsequent backups to fail.
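Side note: before bouncing the VM you can at least confirm that it's the VM's QEMU process still holding the zvol open. Untested sketch; the device path is built from the dataset name in the error message.

Code:
# Show which process still has the fleecing zvol open
fuser -v /dev/zvol/rpool/data/vm-310-fleece-1
# or
lsof /dev/zvol/rpool/data/vm-310-fleece-1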

I have a few logs like this. Unfortunately, the log from when VM 310 originally failed is gone, but I'm pretty sure this is what happened.

Code:
## THIS IS PVE 8.2.2 WRITING TO A TRUENAS.

INFO: starting new backup job: vzdump 306 315 312 320 314 --notes-template '{{guestname}}, {{vmid}}' --mode snapshot --compress zstd --fleecing '1,storage=local-zfs' --storage <snip> --quiet 1
INFO: skip external VMs: 306, 314, 315, 320
INFO: Starting Backup of VM 312 (qemu)
INFO: Backup started at 2024-07-04 21:17:06
INFO: status = running
INFO: VM Name: <snip>
INFO: include disk 'virtio0' 'local-zfs:vm-312-disk-0' 256G
INFO: include disk 'virtio1' 'local-zfs:vm-312-disk-1' 768G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '<snip>'
INFO: drive-virtio0: attaching fleecing image local-zfs:vm-312-fleece-0 to QEMU
INFO: drive-virtio1: attaching fleecing image local-zfs:vm-312-fleece-1 to QEMU
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '24d86d98-85a5-44f7-93f4-14cc6e944b44'
INFO: resuming VM again
INFO:   0% (261.0 MiB of 1.0 TiB) in 3s, read: 87.0 MiB/s, write: 84.0 MiB/s
INFO:   1% (10.4 GiB of 1.0 TiB) in 53s, read: 207.0 MiB/s, write: 205.0 MiB/s
INFO:   2% (20.6 GiB of 1.0 TiB) in 1m 38s, read: 231.9 MiB/s, write: 230.6 MiB/s
INFO:   3% (30.7 GiB of 1.0 TiB) in 2m 30s, read: 200.6 MiB/s, write: 195.5 MiB/s
INFO:   4% (41.1 GiB of 1.0 TiB) in 3m 36s, read: 160.7 MiB/s, write: 158.0 MiB/s
INFO:   5% (51.3 GiB of 1.0 TiB) in 4m 43s, read: 156.3 MiB/s, write: 155.3 MiB/s
INFO:   6% (61.6 GiB of 1.0 TiB) in 5m 40s, read: 184.6 MiB/s, write: 160.6 MiB/s
INFO:   7% (71.7 GiB of 1.0 TiB) in 7m 16s, read: 107.8 MiB/s, write: 107.3 MiB/s
zstd: error 70 : Write error : cannot write block : No space left on device
INFO:   7% (79.6 GiB of 1.0 TiB) in 23m 29s, read: 8.3 MiB/s, write: 8.3 MiB/s
ERROR: vma_queue_write: write error - Broken pipe
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 312 failed - vma_queue_write: write error - Broken pipe
INFO: Failed at 2024-07-04 21:43:15
INFO: Backup job finished with errors
INFO: notified via target `mail-to-root`
TASK ERROR: job errors
 
Thanks for your followup.

It appears that in this case the target is NOT a PBS server. The backup job has fleecing enabled, and I see fleecing in the log, but this is a PVE backup going directly to NFS.
Yes, fleecing is not exclusive to PBS.
I'm rebuilding the PBS server at this site. Our temporary solution was old-style PVE backups direct to NFS storage ... which then filled up.
It looks like the backup failed because the NAS filled up, and that left the fleecing file (which is on local-zfs!) locked.
Even though that file sits on storage that wasn't affected, the VM needs a power cycle before the lock will clear; you can't nuke the fleecing dataset until you do. And that is what caused the subsequent backups to fail.
I'll try to reproduce the issue. The cleanup should be done except if it was a hard failure.
I have a few logs like this. Unfortunately, the log from when VM 310 originally failed is gone, but I'm pretty sure this is what happened.
Did you check the Task History when selecting the node and filtering for Task Type vzdump?
The backup in the log you posted did not fail hard, and there are no errors about detaching/removing the fleecing images, so I assume that worked fine. You can check with e.g. pvesm list local-zfs.

Works for me:
Code:
root@pve8a1:~# pvesm list zfs2
Volid Format  Type      Size VMID
root@pve8a1:~# vzdump 106 --storage nfs --fleecing 1,storage=zfs2
INFO: starting new backup job: vzdump 106 --fleecing '1,storage=zfs2' --storage nfs
INFO: Starting Backup of VM 106 (qemu)
INFO: Backup started at 2024-07-12 14:54:49
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: win11
INFO: include disk 'sata0' 'rbd:vm-106-disk-1' 52G
INFO: include disk 'efidisk0' 'rbd:vm-106-disk-4' 528K
INFO: include disk 'tpmstate0' 'rbd:vm-106-disk-5' 4M
INFO: creating vzdump archive '/mnt/pve/nfs/dump/vzdump-qemu-106-2024_07_12-14_54_49.vma'
INFO: starting kvm to execute backup task
/dev/rbd0
swtpm_setup: Not overwriting existing state file.
INFO: attaching TPM drive to QEMU for backup
INFO: drive-sata0: attaching fleecing image zfs2:vm-106-fleece-0 to QEMU
INFO: started backup task 'c054f0cb-1163-47f4-9e7c-2107c1fd3a67'
INFO:   9% (4.8 GiB of 52.0 GiB) in 3s, read: 1.6 GiB/s, write: 1.4 GiB/s
INFO:  16% (8.6 GiB of 52.0 GiB) in 6s, read: 1.3 GiB/s, write: 1.2 GiB/s
INFO:  31% (16.3 GiB of 52.0 GiB) in 9s, read: 2.6 GiB/s, write: 1.4 GiB/s
INFO:  40% (20.8 GiB of 52.0 GiB) in 13s, read: 1.1 GiB/s, write: 1.0 GiB/s
INFO:  44% (23.4 GiB of 52.0 GiB) in 15s, read: 1.3 GiB/s, write: 1.1 GiB/s
ERROR: vma_queue_write: write error - No space left on device
INFO: aborting backup job
INFO: stopping kvm after backup task
trying to acquire lock...
 OK
ERROR: Backup of VM 106 failed - vma_queue_write: write error - No space left on device
INFO: Failed at 2024-07-12 14:55:09
INFO: Backup job finished with errors
INFO: skipping disabled matcher 'default-matcher'
job errors
root@pve8a1:~# pvesm list zfs2
Volid Format  Type      Size VMID
And while the backup was running:
Code:
root@pve8a1 ~ # pvesm list zfs2
Volid                Format  Type             Size VMID
zfs2:vm-106-fleece-0 raw     images    55834574848 106
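If you prefer the CLI over the GUI for that, something along these lines should turn up the older vzdump task logs. Treat it as a sketch: the --typefilter flag follows the node tasks API, and the path is the standard task log location; double-check both on your node.

Code:
# Recent tasks on this node, filtered to backup jobs
pvenode task list --typefilter vzdump --limit 50

# The raw task logs live under /var/log/pve/tasks/; grep them for the image name
grep -rl 'vm-310-fleece' /var/log/pve/tasks/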
 
Hey, thanks for the followup.

I looked at the bug tracker. Yes, that looks like the string of logs I saw, and "dataset is busy" was the error when I tried to zfs destroy the fleecing file. (It looks like the fix is going to increase some timeouts for disk activities.)

I'll try your advice about searching logs. I've not done much of that yet with Proxmox. I have had to delete several fleecing files, so there should be something there.
 
Found the original. This is just the relevant snippet from the job log; other VMs were processed as well. (I could email you the actual log; I'd rather not post it here.)

Code:
INFO: Starting Backup of VM 310 (qemu)
INFO: Backup started at 2024-06-30 02:46:40
INFO: status = running
INFO: VM Name: <snip>
INFO: include disk 'virtio0' 'local-zfs:vm-310-disk-0' 32G
INFO: include disk 'virtio1' 'local-zfs:vm-310-disk-1' 100G
INFO: include disk 'virtio2' 'local-zfs:vm-310-disk-2' 8G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive<snip>
INFO: drive-virtio0: attaching fleecing image local-zfs:vm-310-fleece-0 to QEMU
INFO: drive-virtio1: attaching fleecing image local-zfs:vm-310-fleece-1 to QEMU
INFO: drive-virtio2: attaching fleecing image local-zfs:vm-310-fleece-2 to QEMU
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'a0198d7f-a482-4a61-8bdf-b2af44abbb64'
INFO: resuming VM again
INFO: 5% (8.3 GiB of 140.0 GiB) in 3s, read: 2.8 GiB/s, write: 73.5 MiB/s
INFO: 6% (8.7 GiB of 140.0 GiB) in 6s, read: 130.3 MiB/s, write: 126.8 MiB/s
INFO: 7% (9.9 GiB of 140.0 GiB) in 19s, read: 92.3 MiB/s, write: 91.6 MiB/s
INFO: 8% (11.3 GiB of 140.0 GiB) in 37s, read: 80.7 MiB/s, write: 73.9 MiB/s
INFO: 9% (12.6 GiB of 140.0 GiB) in 56s, read: 71.8 MiB/s, write: 65.2 MiB/s
INFO: 10% (14.1 GiB of 140.0 GiB) in 1m 13s, read: 91.9 MiB/s, write: 85.6 MiB/s
INFO: 11% (15.4 GiB of 140.0 GiB) in 1m 28s, read: 86.3 MiB/s, write: 86.2 MiB/s
INFO: 12% (16.8 GiB of 140.0 GiB) in 1m 46s, read: 79.5 MiB/s, write: 73.1 MiB/s
INFO: 13% (18.2 GiB of 140.0 GiB) in 2m 2s, read: 91.8 MiB/s, write: 83.9 MiB/s
INFO: 14% (19.6 GiB of 140.0 GiB) in 2m 22s, read: 71.4 MiB/s, write: 71.2 MiB/s
INFO: 15% (21.0 GiB of 140.0 GiB) in 2m 41s, read: 74.7 MiB/s, write: 68.7 MiB/s
INFO: 16% (22.5 GiB of 140.0 GiB) in 2m 58s, read: 86.4 MiB/s, write: 79.4 MiB/s
INFO: 17% (23.8 GiB of 140.0 GiB) in 3m 16s, read: 78.8 MiB/s, write: 78.6 MiB/s
INFO: 18% (25.2 GiB of 140.0 GiB) in 3m 30s, read: 100.6 MiB/s, write: 91.8 MiB/s
INFO: 19% (26.7 GiB of 140.0 GiB) in 3m 47s, read: 87.1 MiB/s, write: 80.7 MiB/s
INFO: 20% (28.1 GiB of 140.0 GiB) in 4m 1s, read: 107.2 MiB/s, write: 100.2 MiB/s
INFO: 21% (29.5 GiB of 140.0 GiB) in 4m 15s, read: 99.2 MiB/s, write: 98.8 MiB/s
INFO: 22% (30.8 GiB of 140.0 GiB) in 4m 42s, read: 50.0 MiB/s, write: 45.6 MiB/s
INFO: 23% (32.3 GiB of 140.0 GiB) in 5m 2s, read: 76.2 MiB/s, write: 70.8 MiB/s
INFO: 24% (33.6 GiB of 140.0 GiB) in 5m 18s, read: 85.0 MiB/s, write: 84.9 MiB/s
INFO: 25% (35.1 GiB of 140.0 GiB) in 5m 36s, read: 82.4 MiB/s, write: 76.0 MiB/s
INFO: 26% (36.4 GiB of 140.0 GiB) in 5m 52s, read: 86.8 MiB/s, write: 79.9 MiB/s
INFO: 27% (37.8 GiB of 140.0 GiB) in 6m 10s, read: 78.7 MiB/s, write: 78.7 MiB/s
INFO: 28% (39.2 GiB of 140.0 GiB) in 6m 27s, read: 84.2 MiB/s, write: 77.2 MiB/s
INFO: 29% (40.6 GiB of 140.0 GiB) in 6m 57s, read: 48.8 MiB/s, write: 47.7 MiB/s
INFO: 30% (42.1 GiB of 140.0 GiB) in 7m 20s, read: 66.6 MiB/s, write: 61.3 MiB/s
INFO: 31% (43.5 GiB of 140.0 GiB) in 7m 41s, read: 66.9 MiB/s, write: 66.5 MiB/s
INFO: 32% (44.9 GiB of 140.0 GiB) in 7m 57s, read: 87.5 MiB/s, write: 79.9 MiB/s
INFO: 33% (46.3 GiB of 140.0 GiB) in 8m 14s, read: 84.7 MiB/s, write: 77.8 MiB/s
INFO: 34% (47.6 GiB of 140.0 GiB) in 8m 32s, read: 77.7 MiB/s, write: 77.7 MiB/s
INFO: 35% (49.0 GiB of 140.0 GiB) in 8m 50s, read: 77.7 MiB/s, write: 72.5 MiB/s
INFO: 36% (50.4 GiB of 140.0 GiB) in 9m 9s, read: 77.6 MiB/s, write: 71.2 MiB/s
INFO: 37% (51.8 GiB of 140.0 GiB) in 9m 29s, read: 71.6 MiB/s, write: 71.2 MiB/s
INFO: 38% (53.2 GiB of 140.0 GiB) in 9m 47s, read: 78.9 MiB/s, write: 71.9 MiB/s
INFO: 39% (54.6 GiB of 140.0 GiB) in 10m 4s, read: 83.2 MiB/s, write: 77.3 MiB/s
INFO: 40% (56.0 GiB of 140.0 GiB) in 10m 21s, read: 83.8 MiB/s, write: 81.3 MiB/s
INFO: 41% (57.5 GiB of 140.0 GiB) in 10m 41s, read: 74.2 MiB/s, write: 63.2 MiB/s
INFO: 42% (58.8 GiB of 140.0 GiB) in 11m 3s, read: 63.5 MiB/s, write: 50.6 MiB/s
INFO: 43% (60.2 GiB of 140.0 GiB) in 11m 29s, read: 54.9 MiB/s, write: 50.9 MiB/s
INFO: 44% (61.7 GiB of 140.0 GiB) in 11m 54s, read: 59.7 MiB/s, write: 59.6 MiB/s
INFO: 45% (63.1 GiB of 140.0 GiB) in 12m 10s, read: 89.5 MiB/s, write: 82.0 MiB/s
INFO: 46% (64.4 GiB of 140.0 GiB) in 12m 26s, read: 88.6 MiB/s, write: 81.1 MiB/s
INFO: 47% (65.8 GiB of 140.0 GiB) in 13m, read: 41.6 MiB/s, write: 41.6 MiB/s
INFO: 48% (67.2 GiB of 140.0 GiB) in 13m 18s, read: 78.3 MiB/s, write: 71.5 MiB/s
INFO: 49% (68.7 GiB of 140.0 GiB) in 13m 43s, read: 59.9 MiB/s, write: 55.0 MiB/s
INFO: 50% (70.1 GiB of 140.0 GiB) in 14m 2s, read: 77.6 MiB/s, write: 70.8 MiB/s
INFO: 51% (71.4 GiB of 140.0 GiB) in 14m 27s, read: 53.9 MiB/s, write: 53.5 MiB/s
INFO: 52% (72.9 GiB of 140.0 GiB) in 14m 46s, read: 77.2 MiB/s, write: 70.9 MiB/s
INFO: 53% (74.2 GiB of 140.0 GiB) in 15m 9s, read: 60.1 MiB/s, write: 56.3 MiB/s
INFO: 54% (75.6 GiB of 140.0 GiB) in 15m 27s, read: 79.5 MiB/s, write: 79.3 MiB/s
INFO: 55% (77.1 GiB of 140.0 GiB) in 15m 43s, read: 92.7 MiB/s, write: 84.8 MiB/s
INFO: 56% (78.5 GiB of 140.0 GiB) in 15m 59s, read: 90.4 MiB/s, write: 82.5 MiB/s
INFO: 57% (79.9 GiB of 140.0 GiB) in 16m 16s, read: 84.2 MiB/s, write: 83.9 MiB/s
INFO: 58% (81.2 GiB of 140.0 GiB) in 16m 34s, read: 78.4 MiB/s, write: 71.8 MiB/s
INFO: 59% (82.6 GiB of 140.0 GiB) in 17m 7s, read: 43.3 MiB/s, write: 39.5 MiB/s
INFO: 60% (84.2 GiB of 140.0 GiB) in 17m 31s, read: 64.9 MiB/s, write: 59.7 MiB/s
INFO: 61% (85.4 GiB of 140.0 GiB) in 17m 46s, read: 85.3 MiB/s, write: 85.3 MiB/s
INFO: 62% (86.8 GiB of 140.0 GiB) in 18m 5s, read: 75.2 MiB/s, write: 68.6 MiB/s
INFO: 63% (88.2 GiB of 140.0 GiB) in 18m 58s, read: 27.2 MiB/s, write: 24.9 MiB/s
INFO: 64% (89.6 GiB of 140.0 GiB) in 19m 15s, read: 85.7 MiB/s, write: 85.5 MiB/s
INFO: 65% (91.0 GiB of 140.0 GiB) in 19m 39s, read: 59.1 MiB/s, write: 54.1 MiB/s
INFO: 66% (92.4 GiB of 140.0 GiB) in 19m 59s, read: 71.2 MiB/s, write: 65.2 MiB/s
INFO: 67% (93.8 GiB of 140.0 GiB) in 20m 23s, read: 60.3 MiB/s, write: 60.3 MiB/s
INFO: 68% (95.2 GiB of 140.0 GiB) in 20m 46s, read: 62.0 MiB/s, write: 56.8 MiB/s
INFO: 69% (96.7 GiB of 140.0 GiB) in 21m 6s, read: 74.4 MiB/s, write: 68.3 MiB/s
INFO: 70% (98.1 GiB of 140.0 GiB) in 21m 33s, read: 55.3 MiB/s, write: 50.5 MiB/s
INFO: 71% (99.4 GiB of 140.0 GiB) in 22m 2s, read: 45.8 MiB/s, write: 45.4 MiB/s
INFO: 72% (100.8 GiB of 140.0 GiB) in 22m 34s, read: 44.4 MiB/s, write: 40.5 MiB/s
INFO: 73% (102.2 GiB of 140.0 GiB) in 23m 2s, read: 51.1 MiB/s, write: 46.5 MiB/s
INFO: 74% (103.6 GiB of 140.0 GiB) in 23m 36s, read: 43.4 MiB/s, write: 43.1 MiB/s
INFO: 75% (105.0 GiB of 140.0 GiB) in 24m 5s, read: 48.1 MiB/s, write: 44.1 MiB/s
INFO: 76% (106.7 GiB of 140.0 GiB) in 24m 35s, read: 58.9 MiB/s, write: 41.4 MiB/s
INFO: 77% (107.8 GiB of 140.0 GiB) in 24m 50s, read: 74.9 MiB/s, write: 71.1 MiB/s
INFO: 78% (110.1 GiB of 140.0 GiB) in 25m 34s, read: 53.5 MiB/s, write: 28.8 MiB/s
INFO: 79% (110.6 GiB of 140.0 GiB) in 25m 37s, read: 167.4 MiB/s, write: 159.0 MiB/s
INFO: 80% (112.2 GiB of 140.0 GiB) in 25m 48s, read: 142.5 MiB/s, write: 119.4 MiB/s
INFO: 81% (113.4 GiB of 140.0 GiB) in 26m 1s, read: 99.8 MiB/s, write: 83.9 MiB/s
INFO: 82% (114.8 GiB of 140.0 GiB) in 26m 9s, read: 180.0 MiB/s, write: 142.5 MiB/s
INFO: 83% (116.2 GiB of 140.0 GiB) in 26m 19s, read: 143.2 MiB/s, write: 124.8 MiB/s
INFO: 84% (118.3 GiB of 140.0 GiB) in 26m 29s, read: 209.5 MiB/s, write: 119.1 MiB/s
INFO: 87% (122.0 GiB of 140.0 GiB) in 26m 32s, read: 1.2 GiB/s, write: 93.1 MiB/s
INFO: 89% (126.0 GiB of 140.0 GiB) in 26m 35s, read: 1.3 GiB/s, write: 125.9 MiB/s
INFO: 90% (126.6 GiB of 140.0 GiB) in 26m 38s, read: 224.8 MiB/s, write: 133.5 MiB/s
INFO: 91% (127.8 GiB of 140.0 GiB) in 26m 43s, read: 234.5 MiB/s, write: 117.0 MiB/s
INFO: 92% (128.9 GiB of 140.0 GiB) in 26m 49s, read: 190.6 MiB/s, write: 126.2 MiB/s
INFO: 93% (131.3 GiB of 140.0 GiB) in 26m 56s, read: 361.4 MiB/s, write: 144.5 MiB/s
INFO: 94% (132.2 GiB of 140.0 GiB) in 26m 59s, read: 291.5 MiB/s, write: 44.9 MiB/s
INFO: 95% (133.7 GiB of 140.0 GiB) in 27m 4s, read: 312.1 MiB/s, write: 112.3 MiB/s
INFO: 96% (134.5 GiB of 140.0 GiB) in 27m 7s, read: 271.9 MiB/s, write: 142.3 MiB/s
INFO: 97% (135.8 GiB of 140.0 GiB) in 27m 10s, read: 436.0 MiB/s, write: 121.1 MiB/s
INFO: 98% (138.1 GiB of 140.0 GiB) in 27m 15s, read: 461.6 MiB/s, write: 144.2 MiB/s
INFO: 99% (138.7 GiB of 140.0 GiB) in 27m 20s, read: 131.2 MiB/s, write: 118.0 MiB/s
INFO: 100% (140.0 GiB of 140.0 GiB) in 27m 30s, read: 133.6 MiB/s, write: 125.4 MiB/s
INFO: backup is sparse: 31.84 GiB (22%) total zero data
INFO: transferred 140.00 GiB in 1650 seconds (86.9 MiB/s)
WARN: error removing fleecing image 'local-zfs:vm-310-fleece-1' - zfs error: cannot destroy 'rpool/data/vm-310-fleece-1': dataset is busy
INFO: archive file size: 50.28GB
INFO: adding notes to backup
INFO: Finished Backup of VM 310 (00:27:58)
INFO: Backup finished at 2024-06-30 03:14:38
 
WARN: error removing fleecing image 'local-zfs:vm-310-fleece-1' - zfs error: cannot destroy 'rpool/data/vm-310-fleece-1': dataset is busy
Yes, it's pretty likely to be the same issue as in https://bugzilla.proxmox.com/show_bug.cgi?id=5440
If the disk has not finished detaching from QEMU yet (i.e. the detach code doesn't wait long enough and the cleanup routine continues anyway), ZFS will still consider the dataset busy.
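Until that is fixed, a small retry loop around the manual cleanup is a reasonable stopgap for the case where the backup itself finished and only the cleanup raced the detach, since the dataset stops being busy once QEMU has fully released the image. Rough sketch, dataset name taken from the log above:

Code:
# Retry the destroy a few times; it succeeds once the detach has completed
for i in 1 2 3 4 5; do
    zfs destroy rpool/data/vm-310-fleece-1 && break
    sleep 5
done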
 
