PVE-root is full

milehighmox

New Member
Feb 16, 2026
As the title says, the root pve volume is full. It is not the data store. I have searched and tried several solutions, none of which worked. I have also tried to understand how the root volume is laid out in order to figure out what is eating up the space, with no luck.

Need help. I would be more than happy to post a step-by-step solution once this is solved, since I'm not the first and will not be the last to deal with this issue.

Thanks in advance
 
Please present facts and only facts.
What did you do with your Proxmox VE?
Where do all the backups from your system sit?
Did you run gdu /? And did you check the biggest directories and files on your system?
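If a few megabytes can be freed first, gdu is packaged in the standard Debian repositories that PVE is based on. A minimal sketch (on recent gdu versions, -x keeps it from crossing into the ZFS and NFS mounts):
Code:
apt install gdu    # needs a little free space; clean up logs first if / is at 100%
gdu -x /           # interactive usage browser, staying on the root filesystem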
 
lsblk
Code:
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0 465.8G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0     1G  0 part
└─sda3               8:3    0 464.8G  0 part
  ├─pve-swap       252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta 252:2    0   3.4G  0 lvm
  │ └─pve-data     252:4    0 337.9G  0 lvm
  └─pve-data_tdata 252:3    0 337.9G  0 lvm
    └─pve-data     252:4    0 337.9G  0 lvm
sdb                  8:16   0   7.3T  0 disk
├─sdb1               8:17   0   7.3T  0 part
└─sdb9               8:25   0     8M  0 part
sdc                  8:32   0   7.3T  0 disk
├─sdc1               8:33   0   7.3T  0 part
└─sdc9               8:41   0     8M  0 part
zd0                230:0    0   1.2T  0 disk
├─zd0p1            230:1    0    31G  0 part
├─zd0p2            230:2    0     1K  0 part
└─zd0p5            230:5    0   975M  0 part
zd16               230:16   0    85G  0 disk
├─zd16p1           230:17   0    84G  0 part
├─zd16p2           230:18   0     1K  0 part
└─zd16p5           230:21   0   975M  0 part
zd32               230:32   0    32G  0 disk
├─zd32p1           230:33   0    31G  0 part
├─zd32p2           230:34   0     1K  0 part
└─zd32p5           230:37   0   975M  0 part
zd48               230:48   0  12.5G  0 disk
zd64               230:64   0    81G  0 disk
├─zd64p1           230:65   0   976M  0 part
├─zd64p2           230:66   0     1K  0 part
└─zd64p5           230:69   0    80G  0 part
zd80               230:80   0  32.6G  0 disk
zd96               230:96   0    32G  0 disk
├─zd96p1           230:97   0    31G  0 part
├─zd96p2           230:98   0     1K  0 part
└─zd96p5           230:101  0   975M  0 part
zd112              230:112  0  32.6G  0 disk
zd128              230:128  0  32.6G  0 disk
zd144              230:144  0    62G  0 disk
├─zd144p1          230:145  0    61G  0 part
├─zd144p2          230:146  0     1K  0 part
└─zd144p5          230:149  0   976M  0 part
zd160              230:160  0  16.5G  0 disk
zd176              230:176  0  12.5G  0 disk
zd192              230:192  0  32.6G  0 disk
zd208              230:208  0   126G  0 disk
├─zd208p1          230:209  0   125G  0 part
├─zd208p2          230:210  0     1K  0 part
└─zd208p5          230:213  0   975M  0 part
zd224              230:224  0   1.2T  0 disk
├─zd224p1          230:225  0    79G  0 part
├─zd224p2          230:226  0     1K  0 part
└─zd224p5          230:229  0   975M  0 part
zd240              230:240  0   8.7G  0 disk
zd256              230:256  0   261G  0 disk
├─zd256p1          230:257  0   512K  0 part
├─zd256p2          230:258  0   257G  0 part
└─zd256p3          230:259  0     4G  0 part
zd272              230:272  0  13.2G  0 disk
zd288              230:288  0    41G  0 disk
├─zd288p1          230:289  0   512K  0 part
├─zd288p2          230:290  0    39G  0 part
└─zd288p3          230:291  0     2G  0 part
zd304              230:304  0    82G  0 disk
├─zd304p1          230:305  0    81G  0 part
├─zd304p2          230:306  0     1K  0 part
└─zd304p5          230:309  0   975M  0 part
zd320              230:320  0    46G  0 disk
├─zd320p1          230:321  0    45G  0 part
├─zd320p2          230:322  0     1K  0 part
└─zd320p5          230:325  0   975M  0 part
zd336              230:336  0   8.5G  0 disk
zd352              230:352  0    50G  0 disk
├─zd352p1          230:353  0   976M  0 part
├─zd352p2          230:354  0     1K  0 part
└─zd352p5          230:357  0    49G  0 part
zd368              230:368  0    82G  0 disk
├─zd368p1          230:369  0    81G  0 part
├─zd368p2          230:370  0     1K  0 part
└─zd368p5          230:373  0   975M  0 part

df -hT
Code:
Filesystem                        Type      Size  Used Avail Use% Mounted on
udev                              devtmpfs   63G     0   63G   0% /dev
tmpfs                             tmpfs      13G  5.8M   13G   1% /run
/dev/mapper/pve-root              ext4       94G   94G     0 100% /
tmpfs                             tmpfs      63G   58M   63G   1% /dev/shm
tmpfs                             tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                             tmpfs     1.0M     0  1.0M   0% /run/credentials/systemd-journald.service
tmpfs                             tmpfs      63G     0   63G   0% /tmp
mybiz                             zfs       2.3T  128K  2.3T   1% /mybiz
mybiz/iso                         zfs       2.3T  3.7G  2.3T   1% /mybiz/iso
mybiz/vm                          zfs       2.3T  128K  2.3T   1% /mybiz/vm
mybiz/learning_vm                 zfs       2.3T  128K  2.3T   1% /mybiz/learning_vm
/dev/fuse                         fuse      128M   32K  128M   1% /etc/pve
tmpfs                             tmpfs     1.0M     0  1.0M   0% /run/credentials/getty@tty1.service
10.xx.xx.xx:/proxstorage/vmbackup nfs4       11T  152G   11T   2% /mnt/pve/nfsbackup
tmpfs                             tmpfs      13G  8.0K   13G   1% /run/user/0
Code:
# du -hcs /*
0    /bin
392M    /boot
58M    /dev
5.9M    /etc
1.2M    /home
0    /lib
0    /lib64
16K    /lost+found
28K    /media
224G    /mnt
3.7G    /mybiz
4.0K    /opt
du: cannot access '/proc/2095113/task/2095113/fd/4': No such file or directory
du: cannot access '/proc/2095113/task/2095113/fdinfo/4': No such file or directory
du: cannot access '/proc/2095113/fd/3': No such file or directory
du: cannot access '/proc/2095113/fdinfo/3': No such file or directory
0    /proc
72K    /root
5.8M    /run
0    /sbin
4.0K    /srv
0    /sys
0    /tmp
6.1G    /usr
12G    /var
245G    total
 
Please present facts and only facts.
What did you do with your Proxmox VE?
Where do all the backups from your system sit?
Did you run gdu /? And did you check the biggest directories and files on your system?
I use it to host my email and Nextcloud.
Backups are stored on an external USB drive and an NFS share.
No room is left, so installing gdu isn't an option.
 
I guess there is some (I mean, a lot of) data in a directory over which some other filesystem is NOW mounted.
In your case, in /mybiz or /mnt.

Can you temporarily unmount all of them and then check what is inside (and how much)?
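A minimal sketch of that check for the NFS mount, with the storage ID guessed from the /mnt/pve/nfsbackup path above; disabling the storage first stops PVE from re-activating it on its own:
Code:
pvesm set nfsbackup --disable 1    # assumes the storage ID is 'nfsbackup'
umount /mnt/pve/nfsbackup          # drop the NFS mount temporarily
du -h -d1 /mnt                     # anything still counted here lives on pve-root
pvesm set nfsbackup --disable 0    # re-enable the storage when done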
 
My above posted command (du -h -x -d1 /) with the -x option will skip directories on different file systems, as shown here.
So there is no need to unmount anything.
It will skip the mounted paths, but it will not show the space occupied by a directory in its pre-mount state. One can make some inferences about the space usage, but the safest method is to unmount the external device and check again.
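To make the difference concrete (both are safe to run; note that neither variant can see files hidden underneath an active mountpoint):
Code:
du -h -x -d1 /    # -x stays on the root filesystem; ZFS and NFS mounts are skipped
du -h -d1 /       # without -x, the 152G on the NFS share is counted under /mnt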


 
One can make some inferences about the space usage, but the safest method is to unmount the external device and check again.
While I agree that unmounting will give a more robust picture of what we are looking at, I usually try to avoid unmounting at first, as some of these mounts may be system-critical. In this OP's case, using USB & NFS, I suppose we are almost certainly dealing with locally saved /mnt files.
Eventually, at the clean-up stage, he will probably need to unmount anyway.
 
I seem to remember that one can observe files "covered" by a mount by means of a bind mount. But I'd rather not take on remotely instructing an inexperienced admin :cool:
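For the record, a minimal sketch of that bind-mount trick (/tmp/rootview is just an example name):
Code:
mkdir /tmp/rootview              # scratch mountpoint; any empty directory works
mount --bind / /tmp/rootview     # re-expose / without the mounts layered on top
du -h -d1 /tmp/rootview/mnt      # files hidden under /mnt become visible here
umount /tmp/rootview             # clean up afterwards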
 
More likely to be backups/isos/templates in /var/lib/vz.
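That is quick to check: on a default PVE install, vzdump backups land in /var/lib/vz/dump and ISOs/CT templates under /var/lib/vz/template:
Code:
du -h -x -d1 /var/lib/vz     # per-directory totals for the 'local' storage
ls -lh /var/lib/vz/dump      # list any locally stored vzdump backups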
 
I've seen PVE installations with 10G+ log files.
But after cleaning up the logs, he can definitely install gdu.
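A minimal log clean-up along those lines (the 100M target is just an example value):
Code:
journalctl --disk-usage          # show how much the journal currently uses
journalctl --vacuum-size=100M    # shrink archived journal files to ~100M
du -h -x -d1 /var/log            # look for other oversized logs as well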
 
There is no journal size limit set when installing from the PVE 9.0 ISO.
I always change this first thing after installation (the Mod-ET block below).

Code:
[Journal]
# Mod-ET
#Storage=volatile               # necessary for log2RAM
SystemMaxUse=64M
MaxLevelStore=notice
MaxLevelSyslog=notice
#
#Storage=auto
#Compress=yes
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitIntervalSec=30s
#RateLimitBurst=10000
#SystemMaxUse=
#SystemKeepFree=
#SystemMaxFileSize=
 
SystemMaxUse= and RuntimeMaxUse= control how much disk space the journal may use up at most. SystemKeepFree= and RuntimeKeepFree= control how much disk space systemd-journald shall leave free for other uses. systemd-journald will respect both limits and use the smaller of the two values.
The first pair defaults to 10% and the second to 15% of the size of the respective file system, but each of the calculated default values is capped to 4G.
https://www.freedesktop.org/software/systemd/man/latest/journald.conf.html#SystemMaxUse=
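Worked through for the host in this thread (94G pve-root, values rounded):
Code:
# journald defaults on a 94G root filesystem:
#   SystemMaxUse   = min(10% of 94G ≈ 9.4G, 4G cap) = 4G
#   SystemKeepFree = min(15% of 94G ≈ 14.1G, 4G cap) = 4G
journalctl --disk-usage   # compare the actual journal size against these caps
So with stock settings the journal tops out around 4G and cannot, by itself, account for a full 94G root filesystem.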
 