[SOLVED] Force delete of "Pending removals"

I am not sure what you did now
I did nothing now, no worries. As I can do absolutely nothing...

Deleting the checkpoints folder is not freeing any space. Not a single byte.

So the datastore is full and there is nothing we can do except installing a new enclosure. New drives, then recreate the stores, the namespaces, the permissions, and transfer everything? By sync?

That is a seriously ridiculous concept for a backup system; in a future release you should create a reservation by default for your app to prevent that.

We are not Linux guys, but not stupid either, and it's hard to believe, after working for 15 years on Hyper-V and VMware, that moving to Proxmox can end in such a wall.

We were planning on adding a PBS for a 1-2-3 backup strategy anyway, but not being able to make this PBS work without spending 3 days rebuilding everything is really annoying.

Wow..
 
Try to run a zpool trim YourPool and see if it reduces the refreservation. Also make sure you have no ZFS snapshots.

And keep in mind that ZFS is a copy-on-write filesystem, so it needs space to write stuff in order to delete something. You can brick a 100% full ZFS pool to the point where it isn't possible to delete anything until you add more disks to get some free space to remove stuff. Another reason why you should set up monitoring and quotas.
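
For reference, the commands would look roughly like this (a sketch, assuming your pool is really named YourPool and your drives support TRIM):

    zpool trim YourPool          # start trimming all eligible vdevs
    zpool status -t YourPool     # -t shows per-disk trim progress
    zfs list -t snapshot         # check for leftover ZFS snapshots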
 

About monitoring, sure we will, and we'll put a quota in place as well.
Correct me if I'm wrong: the only way to expand my RAIDZ2 of 4 disks is currently to create another RAIDZ2 and "add it" to the existing raidz2-0?

Or can I create another mirror / raidz1 of, let's say, 2 disks, add it to the raidz2 temporarily, and remove it once I have deleted enough data to fit back under the previous quota?

I started the trimming but I can't get a status on the progression.
 
You can add it, but you won't be able to remove it later, because your top-level vdev contains a raidz2. See the manual: https://openzfs.github.io/openzfs-docs/man/8/zpool-remove.8.html
A single disk striped with a raid6 would be really bad...
The only good thing about a raidz2 is that 2 disks may fail without data loss. Lose a single vdev and all data of the pool is lost. So when expanding it, you want to expand it with at least the same reliability, so a minimum of a 3-disk mirror or a 4-disk raidz2.
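
For illustration, adding a second 4-disk raidz2 vdev would look roughly like this (a sketch; YourPool and the /dev/sdX names are placeholders for your actual pool and disks):

    zpool add YourPool raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh

And as the manual above explains, this is a one-way operation: the new raidz2 vdev can't be removed from the pool afterwards.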
 
Being able to expand a RAIDZ2 would have been the best possible scenario, yet sadly it isn't possible.
Do you have any other recommendation for the most expandable ZFS layout that won't consume more than 2 disks for redundancy when expanding, or do we all need to wait until the raidz expansion (attach) feature is publicly released?
 
Most people use striped mirrors: better IOPS performance, faster to resilver, easier to extend, possible to remove vdevs, and when striping 3-disk mirrors you get the same reliability as a raidz2. But yes, you will then lose 66% of your raw capacity. Still the best option... at least when using HDDs.
And it is usually recommended to build your pool by striping multiple small raidz1/2/3 vdevs instead of a single big vdev. So with 18 disks you would, for example, create 6x 3-disk raidz1 or 3x 6-disk raidz2 or 2x 9-disk raidz3, and not a single 18-disk raidz1/2/3, even if that would save money because of less parity overhead. With that in mind, even if it were already possible to extend a raidz1/2/3 vdev, it would still be better to add more raidz1/2/3 vdevs instead of extending an existing one.

Keep in mind that PVE needs IOPS performance, and IOPS performance only scales with the number of vdevs, not the number of disks. A single-vdev 100-disk raidz2 pool is as slow as a single disk, as it only has one vdev.
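
As a sketch of the difference (tank and the sdX names are placeholders; adjust to your disks):

    # 9 disks as striped 3-disk mirrors: 3 vdevs, so roughly 3x the IOPS
    zpool create tank mirror sda sdb sdc mirror sdd sde sdf mirror sdg sdh sdi

    # the same 9 disks as a single raidz2: more capacity, but only 1 vdev of IOPS
    zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi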
 

Good to know. I didn't think you could get reasonable performance, but you seem to be saying that they might not be so bad in mirrors, similar to a RAID10 I assume.
Do you suggest adding a "special device" to the HDD pools?
 
HDDs are still very slow compared to an SSD-only pool, even in a raid10 and with a special device. But yes, HDDs without a special device are an absolute no-go. And you want your vdevs as small as possible, to have as many vdevs as possible, to squeeze out every bit of IOPS performance you can get.
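
Adding a special device to an existing HDD pool would look roughly like this (a sketch; YourPool and the NVMe device names are placeholders). The special vdev holds the pool's metadata, so it should be mirrored: losing it means losing the whole pool:

    zpool add YourPool special mirror /dev/nvme0n1 /dev/nvme1n1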
 
@fabian

We completed the migration of the datastore to a new one, through a sync job.

We then discovered that since we use namespaces, the permissions do not follow in a sync job, and we need to define a new owner with max depth set to full.
Our future workaround, for when we configure the 1-2-3 strategy, will be to create multiple jobs for each datastore.
Will PBS in the near future support preservation of existing permissions?
And can I suggest making it possible to set a new owner in 1 click at the namespace level?
It would save us from having to select 250 VMs in the GUI and assign a new owner.
thx :)
 
there's a change-owner command in the client in case you want to script it ;)

but sure, feel free to file enhancement requests for (if I understood you correctly?)
- recursively change the owner of all groups within a namespace/datastore
- preserve the owner when syncing/pulling (this needs high privileges, since it's basically equal to setting the owner of all transferred groups)
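
For scripting it, a minimal sketch; the repository string, namespace, and group names here are purely illustrative, so check proxmox-backup-client change-owner --help on your version for the exact parameters it accepts:

    # change the owner of a few hypothetical backup groups in one go
    for group in vm/100 vm/101 ct/200; do
        proxmox-backup-client change-owner "$group" newowner@pbs \
            --repository 'user@pbs@pbs-host:datastore' --ns mynamespace
    done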
 
