ENOSPC: No space left on device


Aug 8, 2023
Milan, Italy

Sorry if it's a stupid question, but I can't solve it by myself.
A couple of weeks ago I created a new PBS server (version 3.0.3): a physical machine with 1 TB of storage (a single SSD).

I set the automatic prune job to keep the last 7 days of backups, and we floated around 90% of space used for some days. Unfortunately, last night PBS ran out of space and all backups failed.

Now I see PBS disk usage at 100.00% and I don't know how to unlock the situation. The plan was to reduce the backup retention to only 5 days; I tried, but how can I manually delete days 6 and 7?

I tried Datastore -> name_of_my_datastore -> Content -> Prune All -> Keep Last 5 -> Prune, but I receive the error "ENOSPC: No space left on device".
I tried to manually delete some less important backups using the red recycle-bin icon; it works, but PBS is still at 100% usage, so no real files were deleted.
I tried to manually start garbage collection, but I receive the error "unable to start garbage collection job on datastore pbs - ENOSPC: No space left on device".
I can't even connect to the shell because of the full disk.

How can I clean something up?

Nobody can help us?
Now it's difficult even to use the web GUI; I keep receiving popups saying PBS has no space for temp files.
I think I can't manually delete backups from the file system without risking corruption.

Any help is appreciated. Thanks
If your datastore etc. is on ZFS and you're using 100% of the space, the only solution is to delete files from it to make free space, or to extend the pool to gain free space.
Garbage collection won't work, since it needs to update the 'atime' of files, and that needs a bit of free space on ZFS.
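To see how close to the edge the pool actually is, the standard ZFS tools work well (the pool name `tank` below is a placeholder for your own pool):

```shell
# Overall pool capacity, fragmentation and health (pool name is a placeholder)
zpool list tank

# Per-dataset usage and remaining free space
zfs list -o name,used,avail,refer tank
```

When `avail` reaches zero, even deletions and atime updates can fail, because ZFS is copy-on-write and briefly needs free space for the metadata of any change; that is why GC dies with ENOSPC here.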

In general, I'd recommend never letting your storage (regardless of type) run full.
Sorry, I don't want to be rude, but this was already clear.
Yes, the file system is ZFS; how can I manually delete files?
Directly on the file system, deleting only the older folders in /mnt/.chunks? Or deleting all the content of /mnt/.chunks and rebuilding all backups from scratch?
As I wrote, I can't do anything in the web GUI.

Directly on the file system, deleting only the older folders in /mnt/.chunks?
That's one option, but be aware that if you delete random chunks, it will corrupt every backup that references those chunks (and make those backups unrestorable).

If you don't have any other data on the pool, I'd start by deleting snapshots (not the chunks) that you don't need anymore and see whether that frees enough space to let the garbage collection run through.
Only if you had no other choice would I delete random chunks (or the whole datastore).
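As a concrete sketch: in a PBS datastore a snapshot is just a directory of small index files, while the actual data lives in .chunks. Assuming the datastore root is /mnt as in this thread (the guest ID and timestamp below are placeholders), removing a whole snapshot directory is safe and frees the references, so a later GC can reclaim the chunks:

```shell
# List the snapshot directories of one guest (ID and path are placeholders)
ls /mnt/vm/100/

# Remove one complete snapshot directory (index files only, NOT chunk data);
# the chunks it referenced become unreferenced and GC can reclaim them later
rm -rf /mnt/vm/100/2023-08-01T02:00:00Z/
```

The key point is to delete whole snapshot directories, never individual files inside .chunks.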

The other option is to temporarily attach a disk to the zpool, let the GC run, and remove the disk again afterwards with 'zpool remove' (though I must admit I have not needed this yet, so I would test that procedure beforehand, e.g. in a VM, to be sure it works the way you want).
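A minimal sketch of that procedure, assuming the pool is called `tank` and the spare disk is /dev/sdb (both placeholders; as said above, test this in a VM first, since `zpool remove` only supports evacuating top-level vdevs in certain pool layouts):

```shell
# Temporarily add a spare disk as an extra top-level vdev (names are placeholders)
zpool add tank /dev/sdb

# With free space available again, run garbage collection on the datastore
proxmox-backup-manager garbage-collection start pbs

# Afterwards, evacuate and detach the temporary disk again
zpool remove tank /dev/sdb
zpool status tank   # wait until the removal/evacuation has completed
```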
Ok, so I decided to manually delete some old chunks (from October 10th), and I immediately regained control of the web GUI and the command line, which is a good first step.

However, I then tried a manual garbage collection and discovered that some of the files I deleted were still used by backups made even after October 10th, presumably because of deduplication, as far as I understand.

So I decided to delete all the backups and recreate them from scratch to avoid confusion. I used the red recycle-bin icon next to every VM, then ran a garbage collection. What I see now is absolutely zero backups, but still 920 GB in use.

2023-10-20T15:08:23+02:00: starting garbage collection on store pbs
2023-10-20T15:08:23+02:00: Start GC phase1 (mark used chunks)
2023-10-20T15:08:23+02:00: Start GC phase2 (sweep unused chunks)
2023-10-20T15:08:23+02:00: processed 1% (5078 chunks)
2023-10-20T15:08:23+02:00: processed 2% (10121 chunks)
2023-10-20T15:08:24+02:00: processed 3% (15279 chunks)
[... identical "processed N%" lines for 4% through 98% trimmed ...]
2023-10-20T15:08:27+02:00: processed 99% (500877 chunks)
2023-10-20T15:08:27+02:00: Removed garbage: 0 B
2023-10-20T15:08:27+02:00: Removed chunks: 0
2023-10-20T15:08:27+02:00: Pending removals: 856.358 GiB (in 505904 chunks)
2023-10-20T15:08:27+02:00: Original data usage: 0 B
2023-10-20T15:08:27+02:00: On-Disk chunks: 0
2023-10-20T15:08:27+02:00: Deduplication factor: 1.00
2023-10-20T15:08:27+02:00: TASK OK

Why are there 856 GB of pending removals? How can I force this deletion so I can start recreating the backups?



Found the answer myself in the guide:

The garbage collection will only remove chunks that haven't been used for at least one day (exactly 24h 5m). This grace period is necessary because chunks in use are marked by touching the chunk, which updates the atime (access time) property. Filesystems are mounted with the relatime option by default. This results in better performance by only updating the atime property if the last access was at least 24 hours ago. The downside is that touching a chunk within these 24 hours will not always update its atime property.
Chunks in the grace period will be logged at the end of the garbage collection task as Pending removals.
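If you want to verify this yourself, you can inspect a chunk's atime directly, or count how many chunks are already outside the grace period (the .chunks path matches this thread; 1445 minutes is the 24h 5m cutoff):

```shell
# Show the access time of one chunk file (path is an example from this thread)
stat -c 'atime: %x' /mnt/.chunks/487f/487f8a49d3d5bc5a285954461f458c8c30710957493575099bf55ce3849b1298

# Count chunk files whose atime is older than the 24h 5m grace period
# (-amin +1445 matches files last accessed more than 1445 minutes ago)
find /mnt/.chunks -type f -amin +1445 | wc -l
```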

So my question now is: can I start creating the new backups, or do I have to wait 24 hours? Do the deleted chunks still count as occupied space? Is there really no option to force the garbage collector to run immediately?

I tried to launch some backups; everything fails with "backup write data failed: command error: write_data upload error: pipelined request failed: inserting chunk on store 'pbs' failed for 487f8a49d3d5bc5a285954461f458c8c30710957493575099bf55ce3849b1298 - mkstemp "/mnt/.chunks/487f/487f8a49d3d5bc5a285954461f458c8c30710957493575099bf55ce3849b1298.tmp_XXXXXX" failed: ENOENT: No such file or directory"

So I deleted and recreated the datastore. Now I'm rebuilding all the backups and the system seems to work.
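In case it helps others, the destroy-and-recreate step can also be done from the CLI; roughly along these lines (datastore name and path are taken from this thread; exact flags and the datastore layout may differ slightly by PBS version, so treat this as a sketch):

```shell
# Remove the datastore configuration (by default this leaves the data on disk)
proxmox-backup-manager datastore remove pbs

# Wipe the old contents, then recreate the datastore on the same path
rm -rf /mnt/.chunks /mnt/vm /mnt/ct /mnt/host
proxmox-backup-manager datastore create pbs /mnt
```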

Best solution? No.
If I had important data in those backups, I couldn't simply delete the whole datastore to solve a "stupid" problem like full storage; however, it seems OK now. I'm surprised PBS doesn't have some sort of self-protection mechanism that halts all backups when, say, 99.5% of the space is used. The backups would fail anyway, but at least we would keep the ability to use the GUI and send commands to clean up space.
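Until something like that exists in PBS itself, one way to build that safety margin on ZFS is a reservation: a small empty dataset that holds back some space, which you release only in an emergency (pool name and size below are placeholders):

```shell
# Hold back 20 GiB that normal writes (including PBS backups) cannot consume
zfs create -o refreservation=20G tank/emergency-reserve

# If the datastore ever fills up, drop the reservation to regain enough free
# space for prune/GC to run, then set it again once the crisis is over
zfs set refreservation=none tank/emergency-reserve
```

With this in place, "100% full" from the application's point of view still leaves the pool itself with working room, so the GUI, prune and GC keep functioning.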

I had the same problem too, and I agree with the request to be able to set a safety margin so you can recover when the system goes "out of space".

