[SOLVED] Force delete of "Pending removals"

I am not sure what you did now
I did nothing now, no worry, as I can do absolutely nothing...

Deleting the checkpoints folder is not freeing any space. Not a single byte.

So the datastore is full and there is nothing we can do except install a new enclosure with new drives, create the datastores, the namespaces, the permissions, and transfer everything? By sync?

That is a seriously ridiculous concept for a backup system. In a future release you should create a reservation by default for your app to prevent that.

We are not Linux guys, but not stupid either, and it's hard to believe after working for 15 years with Hyper-V and VMware that moving to Proxmox can end in such a wall.

We were planning to add a PBS for 1-2-3, but not being able to get this PBS working without spending 3 days rebuilding everything is really annoying.

Wow..
 
Try running zpool trim YourPool and see if it reduces the refreservation. Also make sure you don't have any ZFS snapshots.

And keep in mind that ZFS is a copy-on-write filesystem, so it needs free space to write in order to delete something. You can brick a 100% full ZFS pool to the point where it isn't possible to delete anything until you add more disks to get some free space to remove stuff. Another reason why you should set up monitoring and quotas.
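
For reference, a minimal sketch of the commands involved (the pool name is a placeholder):
Code:
# trim the pool and watch the progress
zpool trim YourPool
zpool status -t YourPool
# check for snapshots that might still be holding space
zfs list -t snapshot
# show space accounting, including the refreservation usage
zfs list -o space YourPool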
 
About monitoring: sure, we will, and we'll put a quota in place as well.
Correct me if I'm wrong: the only way to expand my 4-disk RAIDZ2 is currently to create another RAIDZ2 vdev and "add" it alongside the existing raidz2-0?

Or can I create another mirror / RAIDZ1 of, say, 2 disks and add it to the pool temporarily, until I free enough space to fit back under the previous quota?

I started the trimming but I can't get a status on the progress.
 
You can add it, but you can't remove it later, because one of your top-level vdevs is a raidz2. See the manual: https://openzfs.github.io/openzfs-docs/man/8/zpool-remove.8.html
A single disk striped with a RAID6 would be really bad...
The only good thing about a raidz2 is that 2 disks may fail without data loss. Lose a single vdev and all data of the pool is lost. So when expanding it you want to expand with at least the same reliability, so a 3-disk mirror or a 4-disk raidz2 at minimum.
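
For illustration, a hedged sketch (pool and device names are placeholders):
Code:
# adding a second raidz2 vdev to the existing pool works fine
zpool add YourPool raidz2 sde sdf sdg sdh
# but it can't be undone later: device removal is not supported on pools
# that contain a top-level raidz vdev, so this will just error out
zpool remove YourPool raidz2-1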
 
Being able to expand a RAIDZ2 would remain the best possible scenario, yet it's not possible, sadly.
Do you have any other recommendation for the best expandable ZFS scenario that won't consume more than 2 disks for redundancy when expanding, or do we all need to wait until the raidz expansion (attach) feature is publicly released?
 
Most people use striped mirrors: better IOPS performance, faster to resilver, easier to extend, possible to remove vdevs, and when striping 3-disk mirrors you get the same reliability as a raidz2. But yes, you will then lose 66% of your raw capacity. Still the best option... at least when using HDDs.
And it is usually recommended to build your pool by striping multiple small raidz1/2/3 vdevs instead of a single big vdev. So with 18 disks you would for example create 6x 3-disk raidz1 or 3x 6-disk raidz2 or 2x 9-disk raidz3, and not a single 18-disk raidz1/2/3, even if that would save money because of less parity overhead. With that in mind, even if it were already possible to extend a raidz1/2/3 vdev, it would still be better to add more raidz1/2/3 vdevs instead of extending an existing one.

Keep in mind that PVE needs IOPS performance, and IOPS performance only scales with the number of vdevs, not the number of disks. A single-vdev 100-disk raidz2 pool is as slow as a single disk, because it only has one vdev.
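
As a rough sketch of the striped 3-disk-mirror layout (pool and disk names are placeholders):
Code:
# two striped 3-way mirrors = 6 disks; any 2 disks per mirror may fail
zpool create tank \
    mirror sda sdb sdc \
    mirror sdd sde sdf
# extending later is just a matter of adding another mirror vdev,
# which also adds another vdev's worth of IOPS
zpool add tank mirror sdg sdh sdi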
 
Good to know. I didn't think you could get reasonable performance, but you seem to mention that they might not be so bad in mirrors, similar to a RAID10 I assume.
Do you suggest adding a "special device" to the HDD pools?
 
HDDs are still very slow compared to an SSD-only pool, even in a RAID10 and with a special device. But yes, HDDs without a special device are an absolute no-go. And you want your vdevs as small as possible so you have as many vdevs as possible, to squeeze out every bit of IOPS performance you can get.
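
If it helps, a minimal sketch of adding one (pool and device names are placeholders; the special vdev should be at least as redundant as the data vdevs, since losing it loses the pool):
Code:
# add a mirrored special vdev for metadata to an existing HDD pool
zpool add tank special mirror nvme0n1 nvme1n1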
 
@fabian

We completed the migration of the DB to a new one, through a sync job.

We then discovered that since we use namespaces, the permissions do not follow in a sync job, so we had to define a new owner with max depth set to full.
Our workaround for now, until we configure 1-2-3, will be to create multiple jobs for each datastore.
Will PBS in the near future support preservation of existing permissions?
And can I suggest adding the ability to set a new owner in one click at the namespace level?
It would avoid us having to select 250 VMs in the GUI and assign a new owner to each.
thx :)
 
there's a change-owner command in the client in case you want to script it ;)

but sure, feel free to file enhancement requests (if I understood you correctly?) for:
- recursively changing the owner of all groups within a namespace/datastore
- preserving the owner when syncing/pulling (this needs high privileges, since it's basically equal to setting the owner of all transferred groups)
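
Not authoritative, but a sketch of what scripting that could look like; repository, namespace, group and owner names are made up, and the --ns option assumes a PBS version with namespace support:
Code:
export PBS_REPOSITORY='admin@pbs@backup.example.com:store1'
# change the owner of a few backup groups in one go
for group in vm/101 vm/102 ct/200; do
    proxmox-backup-client change-owner "$group" sync-user@pbs --ns customer-a
done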
 
Yeah, I see how this is difficult to get a clean solution for. If the minimum cutoff were 1 hour it would be better. I do like the idea of using maintenance mode, but then how do you make that work in the GUI and have it tasked out cleanly?

For now I ran find /pbs/local/ -exec touch -a -c -d "2023-02-28 08:00" {} \; -- it took a few hours. Good job it's SSDs.

Removed garbage: 2.013 TiB, backups are on the menu boys!
That's still the only way to delete stupid chunk files, if you need to, lol

However, that command loops through all files and touches each one individually, which is a big loop, and the execution time of touch comes into play as well.

I have a better idea to speed that crap up by at least a factor of 20:
find /datasets/Backup-HDD-SAS/ -type d -exec sh -c 'touch -a -c -d "2023-02-28 08:00" "$0"/*' {} \;
Instead of looping through each file and running touch on it individually, loop through the directories and touch all files in a directory at once.

It just won't touch the folders inside the .chunks folder (you'll get an "argument list too long" error at that level), but that's a good thing, since you don't need to touch those 65536 folders anyway.

Cheers :-)
 
Code:
#!/bin/bash

# path to the datastore's chunk store (adjust to your datastore)
base_dir="/datasets/Backup-HDD-SAS/.chunks/"

# set the atime to 24 hours ago instead of some fixed date
yesterday=$(date --date="yesterday" '+%Y-%m-%d %H:%M')
total_dirs=$(find "$base_dir" -mindepth 1 -maxdepth 1 -type d | wc -l)
echo "Total directories to process: $total_dirs"
processed=0

# loop over the first-level chunk directories and touch all chunk
# files in each directory at once
find "$base_dir" -mindepth 1 -maxdepth 1 -type d | while read -r dir; do
    # the glob must stay outside the quotes; a quoted "$dir/*" is passed
    # literally to touch, which (with -c) then silently does nothing
    touch -a -c -d "$yesterday" "$dir"/*
    ((processed++))
    percentage=$((processed * 100 / total_dirs))
    printf "\rProgress: %d%%" "$percentage"
done

echo -e "\nUpdate complete."

I made a simple script with a progress bar. I needed it myself for testing PBS performance.
With a progress bar you don't get spammed in the shell and still see how far along it is.
And I set the date only to -24h (yesterday) instead of some fixed value, dunno why, but anyway.

PS: If it's not clear, you need to run GC afterwards for the actual deletion.
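
If you want to kick it off from the shell instead of the GUI, something like this should do (the datastore name is a placeholder):
Code:
proxmox-backup-manager garbage-collection start Backup-HDD-SAS
# and to check on it
proxmox-backup-manager garbage-collection status Backup-HDD-SAS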

Cheers
 
Doesn't work :-( Pending removals are still there following a GC run 30 minutes after running this.
 
