Backup of VM failed - backup write data failed: failed: ENOSPC: No space left on device

Thatoo

I'm facing this issue and I'm struggling to solve it.
I can't even run a garbage collection, as it gives me the same error.
I'll try moving some chunk files elsewhere so that a garbage collection can run, but it isn't a pleasant task and it is very stressful.

I wish there were an option to forbid the backup process from writing to the ZFS storage once it reaches something like 5 GB of free space (i.e. prevent it from filling the disk completely), so that there is always enough room left for GC to run.
 
I wish there were an option to forbid the backup process from writing to the ZFS storage once it reaches something like 5 GB of free space

There is a manual "poor man's disk-full-protection" for ZFS. Initialize it like this, with any suitable capacity - here 10 GB:
Code:
~# zfs create -o reservation=10G rpool/diskfullsafeguard

And if you run into that problem, simply drop it:
Code:
~# zfs destroy rpool/diskfullsafeguard

(Or "zfs set reservation=0G rpool/diskfullsafeguard", solve your problem and then set it back to 10 GB...)

---
Edit, added the next day: a (general-purpose) ZFS pool should never be filled above 80%. (For a pool used to hold virtual disks for VMs I've found even lower recommendations - going down to as low as 50%!) Taking this into account, the "actually empty but nominally occupied" reservation should be 20 percent of the usable capacity, or more. This only works because that space is merely reserved - the blocks behind it are not actually occupied or referenced by any data, so they remain available for ZFS to do its job.
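
For reference, a quick sketch to check how full the pool currently is before picking a reservation size (assuming the pool is named rpool, as in the commands above):
Code:
~# zpool list rpool          # the CAP column shows how much of the pool is already allocated
~# zfs list -o space rpool   # breakdown of usage by data, snapshots, reservations and children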
Disclaimer, as so often: this is my understanding - anybody may prove me wrong...
 
Thank you so much for this information. I'll keep it carefully.
 
I'm in a situation I don't understand and I don't know what to do.

To prevent it from happening again, I deleted, little by little, all the content from my datastore. Deleting the first one didn't free any space, nor did the second, and so on... And now that I have deleted everything, the storage remains full.
I can't even run
Code:
zfs create -o reservation=5G rpool/diskfullsafeguard

Here is what PBS shows me:

[screenshot of the PBS datastore usage summary]

I accepted losing all my backups and starting over from tonight, but it seems that in the current situation even my automatic backup tonight won't be possible, as the storage remains full.

Any idea how I can reclaim the space?
 
Here are the logs of the garbage collection:

Code:
2025-05-11T20:53:09+02:00: starting garbage collection on store XXX
2025-05-11T20:53:09+02:00: Start GC phase1 (mark used chunks)
2025-05-11T20:53:09+02:00: Start GC phase2 (sweep unused chunks)
2025-05-11T20:53:24+02:00: processed 1% (3624 chunks)
2025-05-11T20:53:46+02:00: processed 2% (7196 chunks)
2025-05-11T20:54:09+02:00: processed 3% (10725 chunks)
2025-05-11T20:54:31+02:00: processed 4% (14340 chunks)
2025-05-11T20:54:53+02:00: processed 5% (17980 chunks)
2025-05-11T20:55:15+02:00: processed 6% (21629 chunks)
2025-05-11T20:55:37+02:00: processed 7% (25381 chunks)
2025-05-11T20:55:59+02:00: processed 8% (28945 chunks)
2025-05-11T20:56:21+02:00: processed 9% (32567 chunks)
2025-05-11T20:56:43+02:00: processed 10% (36219 chunks)
2025-05-11T20:57:05+02:00: processed 11% (39939 chunks)
2025-05-11T20:57:27+02:00: processed 12% (43672 chunks)
2025-05-11T20:57:49+02:00: processed 13% (47360 chunks)
2025-05-11T20:58:11+02:00: processed 14% (51017 chunks)
2025-05-11T20:58:33+02:00: processed 15% (54649 chunks)
2025-05-11T20:58:55+02:00: processed 16% (58212 chunks)
2025-05-11T20:59:17+02:00: processed 17% (61876 chunks)
2025-05-11T20:59:39+02:00: processed 18% (65533 chunks)
2025-05-11T21:00:01+02:00: processed 19% (69221 chunks)
2025-05-11T21:00:23+02:00: processed 20% (72840 chunks)
2025-05-11T21:00:45+02:00: processed 21% (76498 chunks)
2025-05-11T21:01:07+02:00: processed 22% (80180 chunks)
2025-05-11T21:01:29+02:00: processed 23% (83864 chunks)
2025-05-11T21:01:51+02:00: processed 24% (87502 chunks)
2025-05-11T21:02:13+02:00: processed 25% (91067 chunks)
2025-05-11T21:02:35+02:00: processed 26% (94732 chunks)
2025-05-11T21:02:57+02:00: processed 27% (98461 chunks)
2025-05-11T21:03:20+02:00: processed 28% (102074 chunks)
2025-05-11T21:03:42+02:00: processed 29% (105759 chunks)
2025-05-11T21:04:04+02:00: processed 30% (109369 chunks)
2025-05-11T21:04:26+02:00: processed 31% (113045 chunks)
2025-05-11T21:04:48+02:00: processed 32% (116704 chunks)
2025-05-11T21:05:10+02:00: processed 33% (120353 chunks)
2025-05-11T21:05:32+02:00: processed 34% (123943 chunks)
2025-05-11T21:05:54+02:00: processed 35% (127547 chunks)
2025-05-11T21:06:15+02:00: processed 36% (131243 chunks)
2025-05-11T21:06:37+02:00: processed 37% (134912 chunks)
2025-05-11T21:06:59+02:00: processed 38% (138559 chunks)
2025-05-11T21:07:21+02:00: processed 39% (142101 chunks)
2025-05-11T21:07:43+02:00: processed 40% (145742 chunks)
2025-05-11T21:08:05+02:00: processed 41% (149383 chunks)
2025-05-11T21:08:27+02:00: processed 42% (153137 chunks)
2025-05-11T21:08:49+02:00: processed 43% (156821 chunks)
2025-05-11T21:09:11+02:00: processed 44% (160376 chunks)
2025-05-11T21:09:33+02:00: processed 45% (163966 chunks)
2025-05-11T21:09:55+02:00: processed 46% (167767 chunks)
2025-05-11T21:10:17+02:00: processed 47% (171372 chunks)
2025-05-11T21:10:39+02:00: processed 48% (175044 chunks)
2025-05-11T21:11:01+02:00: processed 49% (178588 chunks)
2025-05-11T21:11:23+02:00: processed 50% (182384 chunks)
2025-05-11T21:11:45+02:00: processed 51% (186000 chunks)
2025-05-11T21:12:07+02:00: processed 52% (189717 chunks)
2025-05-11T21:12:30+02:00: processed 53% (193443 chunks)
2025-05-11T21:12:52+02:00: processed 54% (197012 chunks)
2025-05-11T21:13:13+02:00: processed 55% (200658 chunks)
2025-05-11T21:13:35+02:00: processed 56% (204317 chunks)
2025-05-11T21:13:57+02:00: processed 57% (207886 chunks)
2025-05-11T21:14:19+02:00: processed 58% (211512 chunks)
2025-05-11T21:14:41+02:00: processed 59% (215112 chunks)
2025-05-11T21:15:03+02:00: processed 60% (218781 chunks)
2025-05-11T21:15:26+02:00: processed 61% (222427 chunks)
2025-05-11T21:15:48+02:00: processed 62% (226099 chunks)
2025-05-11T21:16:10+02:00: processed 63% (229813 chunks)
2025-05-11T21:16:32+02:00: processed 64% (233548 chunks)
2025-05-11T21:16:54+02:00: processed 65% (237315 chunks)
2025-05-11T21:17:15+02:00: processed 66% (241009 chunks)
2025-05-11T21:17:37+02:00: processed 67% (244601 chunks)
2025-05-11T21:17:59+02:00: processed 68% (248293 chunks)
2025-05-11T21:18:21+02:00: processed 69% (252021 chunks)
2025-05-11T21:18:43+02:00: processed 70% (255730 chunks)
2025-05-11T21:19:05+02:00: processed 71% (259361 chunks)
2025-05-11T21:19:27+02:00: processed 72% (263076 chunks)
2025-05-11T21:19:49+02:00: processed 73% (266618 chunks)
2025-05-11T21:20:10+02:00: processed 74% (270314 chunks)
2025-05-11T21:20:32+02:00: processed 75% (273834 chunks)
2025-05-11T21:20:54+02:00: processed 76% (277525 chunks)
2025-05-11T21:21:16+02:00: processed 77% (281202 chunks)
2025-05-11T21:21:38+02:00: processed 78% (284927 chunks)
2025-05-11T21:21:59+02:00: processed 79% (288590 chunks)
2025-05-11T21:22:22+02:00: processed 80% (292276 chunks)
2025-05-11T21:22:43+02:00: processed 81% (295862 chunks)
2025-05-11T21:23:05+02:00: processed 82% (299514 chunks)
2025-05-11T21:23:27+02:00: processed 83% (303221 chunks)
2025-05-11T21:23:49+02:00: processed 84% (306820 chunks)
2025-05-11T21:24:10+02:00: processed 85% (310383 chunks)
2025-05-11T21:24:32+02:00: processed 86% (314122 chunks)
2025-05-11T21:24:54+02:00: processed 87% (317843 chunks)
2025-05-11T21:25:16+02:00: processed 88% (321486 chunks)
2025-05-11T21:25:38+02:00: processed 89% (325211 chunks)
2025-05-11T21:26:02+02:00: processed 90% (328859 chunks)
2025-05-11T21:26:24+02:00: processed 91% (332446 chunks)
2025-05-11T21:26:46+02:00: processed 92% (336097 chunks)
2025-05-11T21:27:08+02:00: processed 93% (339842 chunks)
2025-05-11T21:27:31+02:00: processed 94% (343453 chunks)
2025-05-11T21:27:53+02:00: processed 95% (347222 chunks)
2025-05-11T21:28:14+02:00: processed 96% (350770 chunks)
2025-05-11T21:28:36+02:00: processed 97% (354371 chunks)
2025-05-11T21:28:58+02:00: processed 98% (358035 chunks)
2025-05-11T21:29:21+02:00: processed 99% (361814 chunks)
2025-05-11T21:29:40+02:00: Removed garbage: 0 B
2025-05-11T21:29:40+02:00: Removed chunks: 0
2025-05-11T21:29:40+02:00: Pending removals: 888.038 GiB (in 365444 chunks)
2025-05-11T21:29:40+02:00: Original data usage: 0 B
2025-05-11T21:29:40+02:00: On-Disk chunks: 0
2025-05-11T21:29:40+02:00: Deduplication factor: 1.00
2025-05-11T21:29:40+02:00: queued notification (id=006e3ffe-1e50-4e05-a76d-2eaaa938374a)
2025-05-11T21:29:40+02:00: TASK OK

But it did nothing; there are still plenty of chunks in /mnt/datastore/XXX/.chunks/.
I don't know what to do to get back to an empty datastore before tonight's backup.
 
Hi,

But it did nothing; there are still plenty of chunks in /mnt/datastore/XXX/.chunks/.
By default the "GC Access-Time Cutoff" is 24h and 5m, that mean the chunk are only deleted after 24h and 5m is they are not used anymore.

You can change this setting temporarily to force the GC.
[screenshots showing the GC Access-Time Cutoff option in the datastore settings]
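
If you want to sanity-check this from the shell first, here is a rough sketch (my own illustration, not a PBS tool), using the datastore path mentioned above; 1445 minutes corresponds to the default 24h 5m cutoff, and the atime/relatime settings of the underlying filesystem can influence the result:
Code:
~# find /mnt/datastore/XXX/.chunks -type f -amin +1445 | wc -l   # chunks not accessed for more than 24h 5m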

Best regards,
 
Hey ... um ... if you started manually deleting chunks, then I wholeheartedly support you actually DELETING the datastore and starting over.

Click this button.

[screenshot of the button to remove the datastore]

It's going to ask if you want to preserve your jobs. I've tried that, and didn't like the results.
I'd recommend the full delete.
Then re-add the datastore. Don't do the Advanced 'already existing' option. You want it wiped.
And then setup permissions and jobs over again.

---------------------------------
I'm totally with Udo's goal of reserving space to prevent this from happening in the future.
This is a minor difference, but I think it's just simpler than what he recommended.

First off, if your datastore shares storage with root, you should do this so PBS can't stomp on root:
Code:
zfs set reservation=40G rpool/ROOT

And then (this is the simpler part) set a reservation on your backup dataset.
I think this has the same end result: 40G is reserved and can be freed up if the disk fills up.
Code:
zfs set reservation=40G rpool/backup

And if you have a disaster (much like Udo's plan), drop the reservation so GC can operate:
Code:
zfs set reservation=0G rpool/backup
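
Either way, a quick sketch to verify what ended up reserved where (rpool/backup is just the example name from above - substitute whatever dataset your datastore actually lives on):
Code:
zfs get -r reservation rpool
zfs list -r -o name,used,avail,reservation rpool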
 