All ZFS Pools Showing as 97% Full in PVE

LBP321

New Member
Mar 13, 2026
Hello everyone,

I use Proxmox and Proxmox Backup Server. I have a single ZFS-formatted 8TB archive HDD that I use with Nextcloud that's showing 97% full, a RAID1 ZFS 8TB HDD pool that I use with PBS for backups that's showing 97% full, and a RAID6 ZFS 8TB HDD pool used as a secondary PBS backup that is also showing 97% full. Why is this? Both PBS instances are showing around 56% full when logged-in.
 

Do you have "pictures"?


Here are screenshots from my main PVE:
Main PBS.png
PVE Archive.png

Here is a screenshot from the main PBS:
Main PBS-2.png

Here is a screenshot from the remote PVE with the remote PBS's storage:

Remote PBS.png
Here is a screenshot from the remote PBS:

Remote PBS-2.png

Do you run zpool trim on every pool?
I have not. What does that do?
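Roughly: zpool trim asks the underlying devices to discard blocks ZFS no longer uses. A minimal sketch of running it on every pool (note it only helps on TRIM-capable devices such as SSDs or thin-provisioned LUNs; plain spinning HDDs simply ignore the request, so on HDD-only pools it will not change the numbers):

```shell
# Manually start a TRIM on every imported pool
# (no-op on devices that do not support TRIM, e.g. ordinary HDDs):
for pool in $(zpool list -H -o name); do
    zpool trim "$pool"
done

# Watch progress / per-vdev trim status:
zpool status -t
```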

You should not use more than 80% of your space on HDD pools without a ZFS special device vdev (n× SSDs, n >= 2).
I don't know what this means, I'm sorry.

And have you set up the two Proxmox jobs (prune and GC) for your PBS?

Yes. I have a weekly job for both. Here's a screenshot:
Prune and GC.png
 
I have a single ZFS-formatted 8TB archive HDD that I use with Nextcloud that's showing 97% full,
Is that the archive one?

a RAID1 ZFS 8TB HDD pool that I use with PBS for backups that's showing 97% full,
Which one is that? Also there is no RAID1 in ZFS, do you mean mirror?

and a RAID6 ZFS 8TB HDD pool used as a secondary PBS backup that is also showing 97% full.
No RAID6 in ZFS, do you mean RAIDZ2?

Why is this? Both PBS instances are showing around 56% full when logged-in.
So I don't fully understand your current setup, what is where, and what the problem is, so I will make a wild guess instead. Correct me if I'm wrong.
- You have a RAIDZ2 on your PVE, named PBS (thinkserver).
- That is 8TB in size
- You have a PBS VM on the thinkserver
- That VM has a RAW disk on PBS (thinkserver) with the default 16k volblocksize
- You fill that PBS with 4TB of data
- You expect the pool PBS (thinkserver) to now also show 4TB used.


But that expectation is wrong. Pool geometry and padding overhead are a thing, especially with a 16k volblocksize.
That is why you should not use RAIDZ for block storage and use mirrors instead.
Unless you really understand the topic. But even then I would not bother and would go with mirrors instead.
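To make the geometry point concrete, here is a back-of-the-envelope sketch of how RAIDZ2 accounts for a single 16K volblock, assuming 4K sectors (ashift=12); the exact overhead depends on vdev width and ashift, so treat the numbers as illustrative, not as measurements from these pools:

```shell
# One 16K volblock on RAIDZ2, assuming ashift=12 (4K sectors):
sector=4096
volblock=16384
data_sectors=$((volblock / sector))   # 4 data sectors
parity=2                              # RAIDZ2 writes 2 parity sectors
written=$((data_sectors + parity))    # 6 sectors before padding

# RAIDZ rounds each allocation up to a multiple of (parity + 1) sectors:
mult=$((parity + 1))
pad=$(( (mult - written % mult) % mult ))
total=$((written + pad))

echo "physical sectors: $total ($((total * 100 / data_sectors))% of logical)"
# prints: physical sectors: 6 (150% of logical)
```

So every 16K the guest writes can occupy ~24K on the pool, which is one reason pool-level usage runs far ahead of what the VM reports.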

Another possible problem could be discard not enabled on the VM disk.
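A sketch of checking and enabling discard; the VM ID (100), disk slot (scsi0), and storage/zvol names below are placeholders, so substitute your own:

```shell
# How much space the zvol holds vs. the default volblocksize
# ("local-zfs/vm-100-disk-0" is a hypothetical zvol name):
zfs get volblocksize,used,referenced local-zfs/vm-100-disk-0

# Enable discard on the VM disk so blocks freed inside the guest
# are returned to the pool (VM 100 / scsi0 are placeholders):
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on

# Then, inside the guest, actually release the freed space:
fstrim -av
```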
 
Yes, the single ZFS is the archive drive. pbs (thinkserver) is the main backup ZFS mirror (thank you for the correction). And the RAIDZ2 (again, thank you) is the remote backup storage external (trashcan). I should've explained my setup earlier, but here it is:

Main server (thinkserver)
Lenovo ThinkCentre
5 Physical Drives:
- 500GB Samsung SATA SSD (local), (local-lvm) - Main PVE OS drive, VM drives, ISOs, etc.
- 4TB NVMe (four_tb) (LVM-Thin) - Nextcloud main storage
- 8TB HDD (archive) (ZFS) - Nextcloud archive storage
- 2 8TB HDDs in ZFS Mirror (pbs) - PBS backup storage

Remote server (trashcan) - Remote PVE
Trash Can Mac Pro
7 physical drives:
- Main SSD - (local), (local-lvm) - Main PVE OS drive, VM drives, ISOs, etc.
- 6× 2TB HDDs in RAIDZ2 (external [trashcan])

Hopefully that all makes sense.
 
We are getting there. So the local PBS is on a mirror and the remote PBS uses a RAIDZ2 pool, but that should be fine, since PBS mostly stores 4MB chunks.

What would worry me is that ZFS on external drives is a danger zone, and that RAIDZ instead of mirrors with only HDDs can lead to pretty bad performance.

But I don't see any obvious storage efficiency problem. Let's go step by step, one pool at a time. Where do you see more storage used than expected?
 
Yeah, the backup performance is rough... It takes about 10 hours now to run a backup. I'm not in a place to spend on upgraded hardware right now though, so I'm just working with it for now.

The three pools that I have concerns with are the archive, pbs, and external (trashcan) that I use with the remote PBS. I ran df -h in the Nextcloud VM, and the archive drive is showing as 59% full. In the local PBS web GUI, it's showing 51% full and the remote PBS web GUI is showing as 53% full. In the local PVE, it's showing the archive and PBS datastores as exactly 97.27% full, and the remote PVE is showing the remote PBS datastore as 97.26% full.
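One way to see where the missing space actually sits is to compare the pool-level and dataset-level views; the pool names below are the ones from this thread, and the property list is just a sketch:

```shell
# Pool-level view: raw capacity, including parity overhead
zpool list -o name,size,alloc,free,cap

# Dataset-level view: what is actually holding the space.
# Snapshots and refreservation on zvols are common culprits
# when the pool looks far fuller than the guests report:
zfs list -r -o name,used,usedbydataset,usedbysnapshots,refreservation archive pbs
```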
 
You don't have to spend much on hardware upgrades. Even adding a simple cheap old SSD for PBS would probably do wonders.

Also do you have a slow uplink? If not, I would not even bother with a local PBS.
 
What would you suggest for cheap old SSDs and what's your opinion on buying used SSDs?

I want to try and follow the 3-2-1 backup rule since I'm not only hosting my data, but my family's as well.

What do you think the issue is regarding the 97% full drives? And thank you, by the way, for all your help so far.
 
What would you suggest for cheap old SSDs and what's your opinion on buying used SSDs?
L2ARC is awesome, even on bad SSDs. You can use an old 256GB SSD you have lying around or get one used. As long as it is not too old, not some bottom-tier QLC drive, and not one with very little TBW left, you should be fine. Google the TBW rating of the disk you want to buy and ask the seller how many TB have already been written.
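Attaching the SSD as L2ARC is a one-liner; the device path below is a placeholder, use your own stable /dev/disk/by-id/ name:

```shell
# Attach an SSD as an L2ARC read cache to the "pbs" pool
# (device path is a placeholder for your actual SSD):
zpool add pbs cache /dev/disk/by-id/ata-EXAMPLE-SSD

# Verify: the device shows up under its own "cache" section
zpool iostat -v pbs
```

A cache device is also safe to experiment with: it holds no irreplaceable data and can be detached again with zpool remove.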

What do you think the issue is regarding the 97% full drives?
I think you first of all have to decide if your current setup is worth saving. For example, the Nextcloud VM should not host the data itself inside its RAW VM disk; otherwise you force the data into 16k volblocks. Instead I would either use a local dataset with a 1M recordsize and access it with VirtIO (beta last time I checked), or connect an NFS share to the VM.
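A minimal sketch of such a dataset, assuming the existing "archive" pool; the child dataset name is made up for the example:

```shell
# Dedicated dataset for bulk file data: large 1M records suit big
# Nextcloud files far better than a zvol's 16k volblocks
# ("archive/nextcloud-data" is a hypothetical dataset name):
zfs create -o recordsize=1M -o compression=lz4 archive/nextcloud-data
```

That dataset could then be exported over NFS (or shared into the VM) so the files live in 1M records on the host instead of inside the VM's RAW disk.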

If you think the current setup is still worth saving, please start by presenting ONE example. We can do the other one later on, but start with one.
This could look something like this:

On PVE, I have the ZFS pool ExamplePool. ExamplePool consists of x HDDs in a RAIDZ and uses the default 16k volblocksize.
Also on that PVE, there is the VM ExampleVM. ExampleVM uses a RAW disk with the discard option and is on ExamplePool.
Here is the output of zpool status and zpool list.
ExampleVM uses X TB, and I would expect it to also use X TB on ExamplePool.

On PBS, I have the ZFS pool ExamplePool2. ExamplePool2 consists of x HDDs in a RAIDZ.
I have the datastore DatastoreExample, which hosts 10 backups of ExampleVM, uses X TB, and has a deduplication factor of X.
Here is the output of zpool status and zpool list.