Backup Error - Fleece dataset already exists

Hello,

I turned on fleecing and I'm seeing backups fail with the following error.

Code:
ERROR: zfs error: cannot create 'rpool/data/vm-10000-fleece-0': dataset already exists
...
ERROR: Backup of VM 10000 failed - zfs error: cannot create 'rpool/data/vm-10000-fleece-0': dataset already exists

Is there a way to reset/delete the fleece dataset?

Thanks.
 
When I've encountered this error in the past, I check for and delete any snapshots that exist for that dataset, then `zfs destroy` it; retry the backup after that.
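Something like this, for example (a rough sketch only - the snapshot name after the '@' is hypothetical, so check your own `zfs list` output before destroying anything):

Code:
# list any snapshots of the fleecing dataset (there may be none)
zfs list -t snapshot -r rpool/data/vm-10000-fleece-0
# destroy each snapshot that shows up (the snapshot name here is made up)
zfs destroy rpool/data/vm-10000-fleece-0@example-snap
# then destroy the dataset itself
zfs destroy rpool/data/vm-10000-fleece-0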
 
When I've encountered this error in the past, I check for and delete any snapshots that exist for that dataset, then `zfs destroy` it; retry the backup after that.
Thanks. Could you give a bit more info on how I would do this? How do I find where the fleece dataset is located? Do you have an example `zfs destroy` command? I've never done this before.
 
Thanks. Could you give a bit more info on how I would do this? How do I find where the fleece dataset is located? Do you have an example `zfs destroy` command? I've never done this before.

You literally posted it in your original message:
rpool/data/vm-10000-fleece-0

Sorry, you're going to have to do some basic searching. I'm willing to help, but I'm between ${dayjob}s at the moment and don't really have the inclination to handhold.

Start with `man zfs-destroy` and do some research; you need to know the basics of using ZFS if you're going to leverage its benefits and avoid its pitfalls.
 
You literally posted it in your original message:
rpool/data/vm-10000-fleece-0

Sorry, you're going to have to do some basic searching. I'm willing to help, but I'm between ${dayjob}s at the moment and don't really have the inclination to handhold.

Start with `man zfs-destroy` and do some research; you need to know the basics of using ZFS if you're going to leverage its benefits and avoid its pitfalls.
Thanks. I understand and I appreciate the additional info!
 
Dear future reader, here's what I did to solve this.

List all the datasets on the node:
Code:
zfs list

Delete the offending dataset:
Code:
zfs destroy -f [dataset]

E.g.
Code:
zfs destroy -f rpool/data/vm-10000-fleece-0
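To verify the removal, listing the dataset again should now fail with "dataset does not exist":

Code:
# should report: cannot open 'rpool/data/vm-10000-fleece-0': dataset does not exist
zfs list rpool/data/vm-10000-fleece-0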
 
Happy you got it solved (dataset removal).

What was the underlying reason for the failed fleecing backup? Snapshots?
I've only seen it occur when a backup doesn't complete successfully. No completion = no cleanup.

This seems like a possible oversight/bug.
 
Hi,
I've only seen it occur when a backup doesn't complete successfully. No completion = no cleanup.

This seems like a possible oversight/bug.
could you share the log of the failed backup task? Cleanup is done on failure too, but depending on what exactly fails, the code for cleanup might not even be reached or there might be another error during cleanup itself.
 
Hi,

could you share the log of the failed backup task? Cleanup is done on failure too, but depending on what exactly fails, the code for cleanup might not even be reached or there might be another error during cleanup itself.

You're in luck ;), it happened again this morning:

Code:
INFO: Backup started at 2024-05-02 07:04:30
INFO: status = running
INFO: VM Name: test01
INFO: include disk 'scsi0' 'data:vm-10000-disk-1' 256G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/mnt/pve/storage/dump/vzdump-qemu-10000-2024_05_02-07_04_30.vma.zst'
INFO: drive-scsi0: attaching fleecing image local-zfs:vm-10000-fleece-0 to QEMU
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'd90e3927-78e9-48d2-b364-b4fdb69e2671'
INFO: resuming VM again
INFO:   0% (413.2 MiB of 256.0 GiB) in 3s, read: 137.8 MiB/s, write: 59.3 MiB/s
INFO:   1% (2.6 GiB of 256.0 GiB) in 44s, read: 54.5 MiB/s, write: 45.2 MiB/s
INFO:   2% (5.1 GiB of 256.0 GiB) in 1m 36s, read: 50.4 MiB/s, write: 48.5 MiB/s
INFO:   3% (8.1 GiB of 256.0 GiB) in 1m 42s, read: 506.3 MiB/s, write: 54.4 MiB/s
INFO:   4% (10.3 GiB of 256.0 GiB) in 2m 14s, read: 69.2 MiB/s, write: 43.3 MiB/s
INFO:   5% (12.8 GiB of 256.0 GiB) in 3m 21s, read: 38.9 MiB/s, write: 37.4 MiB/s
INFO:   6% (15.4 GiB of 256.0 GiB) in 4m 27s, read: 39.5 MiB/s, write: 38.3 MiB/s
INFO:   7% (18.0 GiB of 256.0 GiB) in 5m 22s, read: 48.6 MiB/s, write: 45.9 MiB/s
INFO:   8% (20.5 GiB of 256.0 GiB) in 6m 6s, read: 59.4 MiB/s, write: 53.1 MiB/s
INFO:   9% (23.1 GiB of 256.0 GiB) in 6m 59s, read: 49.4 MiB/s, write: 46.7 MiB/s
INFO:  10% (25.7 GiB of 256.0 GiB) in 7m 24s, read: 107.6 MiB/s, write: 58.8 MiB/s
INFO:  11% (28.2 GiB of 256.0 GiB) in 8m 10s, read: 56.2 MiB/s, write: 46.8 MiB/s
INFO:  12% (30.7 GiB of 256.0 GiB) in 9m 2s, read: 48.9 MiB/s, write: 44.6 MiB/s
INFO:  13% (33.3 GiB of 256.0 GiB) in 9m 43s, read: 65.2 MiB/s, write: 63.8 MiB/s
INFO:  14% (37.1 GiB of 256.0 GiB) in 10m 6s, read: 169.9 MiB/s, write: 56.9 MiB/s
INFO:  18% (46.7 GiB of 256.0 GiB) in 10m 9s, read: 3.2 GiB/s, write: 1.5 MiB/s
INFO:  19% (48.7 GiB of 256.0 GiB) in 10m 26s, read: 120.1 MiB/s, write: 51.9 MiB/s
INFO:  20% (51.3 GiB of 256.0 GiB) in 11m 14s, read: 56.1 MiB/s, write: 55.4 MiB/s
INFO:  21% (55.8 GiB of 256.0 GiB) in 11m 25s, read: 422.5 MiB/s, write: 65.4 MiB/s
INFO:  23% (60.1 GiB of 256.0 GiB) in 11m 28s, read: 1.4 GiB/s, write: 44.6 MiB/s
INFO:  24% (61.6 GiB of 256.0 GiB) in 11m 44s, read: 96.5 MiB/s, write: 54.1 MiB/s
INFO:  25% (64.0 GiB of 256.0 GiB) in 12m 10s, read: 96.2 MiB/s, write: 63.3 MiB/s
INFO:  26% (66.6 GiB of 256.0 GiB) in 13m, read: 52.9 MiB/s, write: 50.7 MiB/s
INFO:  27% (71.5 GiB of 256.0 GiB) in 13m 43s, read: 116.3 MiB/s, write: 47.6 MiB/s
INFO:  30% (78.3 GiB of 256.0 GiB) in 13m 46s, read: 2.3 GiB/s, write: 20.9 MiB/s
INFO:  31% (80.0 GiB of 256.0 GiB) in 13m 49s, read: 587.2 MiB/s, write: 35.0 MiB/s
INFO:  32% (81.9 GiB of 256.0 GiB) in 14m 20s, read: 64.3 MiB/s, write: 58.2 MiB/s
INFO:  33% (85.9 GiB of 256.0 GiB) in 14m 40s, read: 203.1 MiB/s, write: 58.9 MiB/s
INFO:  36% (92.3 GiB of 256.0 GiB) in 14m 43s, read: 2.1 GiB/s, write: 15.1 MiB/s
INFO:  37% (94.7 GiB of 256.0 GiB) in 15m 25s, read: 59.4 MiB/s, write: 55.4 MiB/s
INFO:  38% (97.3 GiB of 256.0 GiB) in 16m 15s, read: 52.4 MiB/s, write: 50.3 MiB/s
INFO:  39% (99.9 GiB of 256.0 GiB) in 17m 5s, read: 52.9 MiB/s, write: 50.5 MiB/s
INFO:  40% (102.4 GiB of 256.0 GiB) in 18m 2s, read: 45.7 MiB/s, write: 43.3 MiB/s
INFO:  41% (105.0 GiB of 256.0 GiB) in 19m 9s, read: 38.8 MiB/s, write: 36.4 MiB/s
INFO:  42% (107.5 GiB of 256.0 GiB) in 20m 8s, read: 44.7 MiB/s, write: 42.0 MiB/s
INFO:  43% (110.1 GiB of 256.0 GiB) in 21m 19s, read: 37.2 MiB/s, write: 35.7 MiB/s
INFO:  44% (112.6 GiB of 256.0 GiB) in 22m 23s, read: 40.4 MiB/s, write: 38.8 MiB/s
INFO:  45% (116.4 GiB of 256.0 GiB) in 23m 14s, read: 74.8 MiB/s, write: 46.9 MiB/s
INFO:  46% (118.4 GiB of 256.0 GiB) in 23m 39s, read: 84.1 MiB/s, write: 41.0 MiB/s
INFO:  49% (127.7 GiB of 256.0 GiB) in 23m 42s, read: 3.1 GiB/s, write: 5.3 KiB/s
INFO:  50% (128.5 GiB of 256.0 GiB) in 23m 45s, read: 276.0 MiB/s, write: 46.7 MiB/s
INFO:  51% (130.6 GiB of 256.0 GiB) in 24m 5s, read: 106.9 MiB/s, write: 68.4 MiB/s
INFO:  52% (133.4 GiB of 256.0 GiB) in 24m 15s, read: 294.2 MiB/s, write: 44.3 MiB/s
INFO:  56% (145.6 GiB of 256.0 GiB) in 24m 18s, read: 4.0 GiB/s, write: 9.3 KiB/s
INFO:  57% (146.4 GiB of 256.0 GiB) in 24m 21s, read: 272.5 MiB/s, write: 90.9 MiB/s
INFO:  58% (150.5 GiB of 256.0 GiB) in 24m 32s, read: 382.8 MiB/s, write: 38.9 MiB/s
INFO:  63% (162.0 GiB of 256.0 GiB) in 24m 35s, read: 3.8 GiB/s, write: 13.3 KiB/s
INFO:  64% (165.9 GiB of 256.0 GiB) in 24m 48s, read: 309.3 MiB/s, write: 42.7 MiB/s
INFO:  69% (177.4 GiB of 256.0 GiB) in 24m 51s, read: 3.8 GiB/s, write: 9.3 KiB/s
INFO:  72% (186.2 GiB of 256.0 GiB) in 24m 54s, read: 2.9 GiB/s, write: 21.6 MiB/s
INFO:  75% (194.2 GiB of 256.0 GiB) in 24m 57s, read: 2.7 GiB/s, write: 39.2 MiB/s
INFO:  79% (204.6 GiB of 256.0 GiB) in 25m, read: 3.5 GiB/s, write: 6.8 MiB/s
INFO:  82% (212.2 GiB of 256.0 GiB) in 25m 3s, read: 2.5 GiB/s, write: 21.6 MiB/s
INFO:  87% (223.2 GiB of 256.0 GiB) in 25m 6s, read: 3.7 GiB/s, write: 12.0 KiB/s
INFO:  88% (226.2 GiB of 256.0 GiB) in 25m 9s, read: 1011.7 MiB/s, write: 35.3 MiB/s
INFO:  89% (228.3 GiB of 256.0 GiB) in 25m 30s, read: 102.2 MiB/s, write: 43.0 MiB/s
INFO:  93% (239.4 GiB of 256.0 GiB) in 25m 33s, read: 3.7 GiB/s, write: 36.0 KiB/s
INFO:  94% (242.1 GiB of 256.0 GiB) in 25m 36s, read: 930.4 MiB/s, write: 61.2 MiB/s
INFO:  95% (243.2 GiB of 256.0 GiB) in 25m 51s, read: 72.8 MiB/s, write: 65.4 MiB/s
INFO:  96% (246.4 GiB of 256.0 GiB) in 26m 21s, read: 109.7 MiB/s, write: 44.8 MiB/s
INFO:  99% (255.9 GiB of 256.0 GiB) in 26m 24s, read: 3.1 GiB/s, write: 6.0 MiB/s
INFO: 100% (256.0 GiB of 256.0 GiB) in 26m 28s, read: 36.0 MiB/s, write: 34.6 MiB/s
INFO: backup is sparse: 184.21 GiB (71%) total zero data
INFO: transferred 256.00 GiB in 1588 seconds (165.1 MiB/s)
WARN: error removing fleecing image 'local-zfs:vm-10000-fleece-0' - zfs error: cannot destroy 'rpool/data/vm-10000-fleece-0': dataset is busy
INFO: archive file size: 23.67GB
INFO: adding notes to backup
INFO: prune older backups with retention: keep-daily=3, keep-monthly=3, keep-weekly=3
INFO: removing backup 'storage:backup/vzdump-qemu-10000-2024_04_25-07_01_33.vma.zst'
INFO: pruned 1 backup(s) not covered by keep-retention policy
INFO: Finished Backup of VM 10000 (00:26:45)
INFO: Backup finished at 2024-05-02 07:31:15

I'm not sure why the fleece dataset would still be busy after the backup completed. Any help with avoiding this in the future would be greatly appreciated!

Thanks.
 
You're in luck ;), it happened again this morning:

Code:
WARN: error removing fleecing image 'local-zfs:vm-10000-fleece-0' - zfs error: cannot destroy 'rpool/data/vm-10000-fleece-0': dataset is busy

(full backup log quoted above)
It might be the same issue as reported here: https://bugzilla.proxmox.com/show_bug.cgi?id=5440 (caused by a low timeout). Can you check if you have a message similar to
Code:
May 03 06:28:03 maurice pvescheduler[3386096]: VM 150 qmp command failed - VM 150 qmp command 'human-monitor-command' failed - got timeout
in your system logs/journal?
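For example, something like this should turn it up (a sketch assuming the standard systemd journal; the time window here assumes the backup log you posted, so adjust it to the failed run):

Code:
# search the journal around the backup window for the qmp timeout
journalctl --since "2024-05-02 07:00:00" --until "2024-05-02 08:00:00" | grep 'human-monitor-command'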
 
Thanks. Yep! The exact same error (the timestamp aligns with the backup log above):

Code:
May 02 07:31:05 pve pvescheduler[1791230]: VM 10000 qmp command failed - VM 10000 qmp command 'human-monitor-command' failed - got timeout
 
Having a similar issue. The fleecing datasets of two VMs remained after an interrupted backup (I believe I cancelled the backup).
Now... if I understand fleecing correctly, it's not used for VM writes, right? The only data that goes there is data READ from the original storage when new data needs to be written to the original storage - meaning all data that is WRITTEN always goes to the original storage. Is that correct?

If so, it should be safe to manually remove the fleecing image? No data will be lost?
Is that all I need to do? Because when I try to delete the fleecing dataset, I get a warning that it's in use.

My question: how do I properly clean this up?
 
Update: after powering off the VM, the fleecing image was no longer locked and I was able to delete it, making backups possible again.

I'm still interested to know whether this is all I needed to do, or whether something more is needed/advised?
 
Hi,
Update: after powering off the VM, the fleecing image was no longer locked and I was able to delete it, making backups possible again.

I'm still interested to know whether this is all I needed to do, or whether something more is needed/advised?
removing the fleecing image is enough.
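For reference, a minimal cleanup sketch (assuming VMID 10000 and the storage/dataset names from earlier in this thread; with the VM powered off, the image should no longer be busy):

Code:
# power the VM off so the fleecing image is no longer attached to QEMU
qm shutdown 10000
# remove the leftover fleecing volume via Proxmox storage tooling...
pvesm free local-zfs:vm-10000-fleece-0
# ...or destroy the underlying ZFS dataset directly
zfs destroy rpool/data/vm-10000-fleece-0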
 
