EXT4 reserved space

haroldas194

Member
Jan 21, 2021
When creating a container by default it creates a FS with 5% EXT4 reserved space, e.g.
Code:
tune2fs -l /dev/mapper/data--ssd-vm--999--disk--0 | awk -F: '/Block count:/ {gsub(/^[ \t]+/, "", $2); total=$2} /Reserved block count:/ {gsub(/^[ \t]+/, "", $2); res=$2} END {printf "Reserved: %.2f%%\n", (res/total)*100}'
Reserved: 5.00%
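For context, the 5% is just the e2fsprogs default reserved-blocks ratio, so when formatting a volume by hand it can also be set to 0 at creation time (placeholder device path below, not a PVE-managed one):
Code:
# mkfs.ext4's -m flag sets the reserved-blocks percentage at creation time.
# /dev/some/volume is a placeholder, not an actual device from this thread.
mkfs.ext4 -m 0 /dev/some/volume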
The reserved blocks were originally intended so that the root user (and system services) can still write when the filesystem fills up, but under containerization that means the real PVE root on the host, not the container's root. So for the container this reservation is essentially pointless.
It also can't be changed from within the container, as the container has no low-level access to the underlying block device.
Code:
root@test-ct:~# mount | grep /dev/mapper/data--ssd-vm--999--disk--0
/dev/mapper/data--ssd-vm--999--disk--0 on / type ext4 (rw,relatime,stripe=16)
root@test-ct:~# tune2fs -m 0 /dev/mapper/data--ssd-vm--999--disk--0
tune2fs 1.47.2 (1-Jan-2025)
tune2fs: No such file or directory while trying to open /dev/mapper/data--ssd-vm--999--disk--0
Couldn't find valid filesystem superblock.
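If I understand it correctly, this fails simply because the unprivileged container only sees the mounted filesystem, not the host's device-mapper node; a quick check along these lines should confirm it (output will vary by setup):
Code:
# Run inside the container: the host's /dev/mapper node is normally not
# exposed, so tune2fs has nothing to open.
ls -l /dev/mapper/ 2>/dev/null                 # usually missing or empty in a CT
stat /dev/mapper/data--ssd-vm--999--disk--0    # expected to fail with "No such file or directory"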
It also can't be changed directly from PVE while the container is running (at least in the LVM case), e.g.
Code:
tune2fs -m 0 /dev/mapper/data--ssd-vm--999--disk--0
tune2fs 1.47.0 (5-Feb-2023)
tune2fs: MMP: device currently active while trying to open /dev/mapper/data--ssd-vm--999--disk--0
MMP error info: node: pve, device: dm-19, updated: Sun Oct  5 21:27:16 2025
It can only be changed directly from PVE while the container is stopped.
Code:
root@pve:~# tune2fs -m 0 /dev/mapper/data--ssd-vm--999--disk--0
tune2fs 1.47.0 (5-Feb-2023)
Setting reserved blocks percentage to 0% (0 blocks)
root@pve:~# tune2fs -l /dev/mapper/data--ssd-vm--999--disk--0 | awk -F: '/Block count:/ {gsub(/^[ \t]+/, "", $2); total=$2} /Reserved block count:/ {gsub(/^[ \t]+/, "", $2); res=$2} END {printf "Reserved: %.2f%%\n", (res/total)*100}'
Reserved: 0.00%
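For anyone wanting to do this per container, a rough sketch of the workaround (the device name is from my setup above; check yours with lvs or pct config first):
Code:
# Workaround sketch: stop the CT so the ext4 volume is unmounted (otherwise
# MMP blocks tune2fs), drop the reservation, then start it again.
CTID=999
DEV=/dev/mapper/data--ssd-vm--${CTID}--disk--0
pct stop "$CTID"
# if the device node is missing after stopping, the LV may need activating first (lvchange -ay)
tune2fs -m 0 "$DEV"
pct start "$CTID"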

Is there any reason Proxmox creates the filesystem with this reserved space, which serves no purpose to the container and just wastes 5% of its disk? Maybe this could be changed?
 
The reserved blocks were originally intended so that the root user (and system services) can still write when the filesystem fills up, but under containerization that means the real PVE root on the host, not the container's root. So for the container this reservation is essentially pointless.
I just did a test writing /dev/random to a file inside an unprivileged container (using the container's root account inside the container), and I could fill it to 100% according to df /. I don't think your statement above is true.
Code:
root@pdm7:/# dd if=/dev/random of=/test status=progress bs=1M
8801746944 bytes (8.8 GB, 8.2 GiB) copied, 28 s, 314 MB/s
dd: error writing '/test': Disk quota exceeded
8395+0 records in
8394+0 records out
8802009088 bytes (8.8 GB, 8.2 GiB) copied, 28.0667 s, 314 MB/s
root@pdm7:/# df -h /
Filesystem               Size  Used Avail Use% Mounted on
qpool/subvol-107-disk-0  9.1G  9.1G     0 100% /
root@pdm7:/# rm /test
root@pdm7:/# df -h /
Filesystem               Size  Used Avail Use% Mounted on
qpool/subvol-107-disk-0  9.0G  819M  8.3G   9% /

EDIT: My mistake, I was not using ext4, so it does not apply to the issue of this thread.
 
Interesting. So there might be a difference in how Proxmox handles it with ZFS (which I see you use) vs. LVM. I just did the same test as you, but I use LVM. New CT, default 5% reserved:
Code:
root@pve:~# tune2fs -l /dev/mapper/data--ssd-vm--107--disk--0 | awk -F: '/Block count:/ {gsub(/^[ \t]+/, "", $2); total=$2} /Reserved block count:/ {gsub(/^[ \t]+/, "", $2); res=$2} END {printf "Reserved: %.2f%%\n", (res/total)*100}'
Reserved: 5.00%
And in the container:
Code:
root@test-ct:~# df -h
Filesystem                              Size  Used Avail Use% Mounted on
/dev/mapper/data--ssd-vm--107--disk--0  7.8G  569M  6.9G   8% /
none                                    492K  4.0K  488K   1% /dev
udev                                     16G     0   16G   0% /dev/tty
tmpfs                                    16G     0   16G   0% /dev/shm
tmpfs                                   6.3G   72K  6.3G   1% /run
tmpfs                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                    16G     0   16G   0% /tmp
root@test-ct:~# dd if=/dev/urandom of=bigfile bs=1M status=progress
7251951616 bytes (7.3 GB, 6.8 GiB) copied, 30 s, 242 MB/s
dd: error writing 'bigfile': No space left on device
6970+0 records in
6969+0 records out
7308279808 bytes (7.3 GB, 6.8 GiB) copied, 30.1765 s, 242 MB/s
root@test-ct:~# df -h
Filesystem                              Size  Used Avail Use% Mounted on
/dev/mapper/data--ssd-vm--107--disk--0  7.8G  7.4G     0 100% /
none                                    492K  4.0K  488K   1% /dev
udev                                     16G     0   16G   0% /dev/tty
tmpfs                                    16G     0   16G   0% /dev/shm
tmpfs                                   6.3G   72K  6.3G   1% /run
tmpfs                                   5.0M     0  5.0M   0% /run/lock
tmpfs
Notice how only 7.4G of 7.8G is used, yet df already reports 100% use. Now, after powering off the container, setting the reserved space to 0% and checking with df again: same size, same used, but 410M available and 95% used.
Code:
root@test-ct:~# df -h
Filesystem                              Size  Used Avail Use% Mounted on
/dev/mapper/data--ssd-vm--107--disk--0  7.8G  7.4G  410M  95% /
none                                    492K  4.0K  488K   1% /dev
udev                                     16G     0   16G   0% /dev/tty
tmpfs                                    16G     0   16G   0% /dev/shm
tmpfs                                   6.3G   68K  6.3G   1% /run
tmpfs                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                    16G     0   16G   0% /tmp
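As a rough sanity check, that freed-up space is about 5% of the filesystem size; the small difference is presumably because the percentage is taken of the raw block count rather than of df's usable size:
Code:
# 5% of the ~7.8G filesystem size reported by df:
awk 'BEGIN { printf "%.0f MiB\n", 7.8 * 1024 * 0.05 }'   # ~399 MiB, close to the 410M now shown as available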
Could you maybe check with tune2fs what it reports on the subvol?
 
Actually, a bit of a brainfart on my part. Since you use ZFS, you don't use ext4 at all, so naturally you don't see the reserved-space issue: ZFS has no equivalent of ext4's reserved blocks.
This only affects storage where the container's filesystem is ext4, such as the LVM-thin + ext4 setup in my tests.
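For completeness, on a ZFS-backed subvolume the size limit would come from dataset properties instead; something like this (dataset name taken from your df output; these are standard ZFS properties, though which ones are actually set depends on how the volume was created) should show it:
Code:
# Inspect the size-related properties of the container's ZFS subvolume.
zfs get refquota,quota,used,available qpool/subvol-107-disk-0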
 