Disk Full - Recovering Space on Root (Proxmox 6.2)

Whitterquick

When trying to patch a kernel, an error message always comes up saying the root partition is out of space. Is there a way to safely recover space by clearing any unneeded files? Up until now I have always deleted the default LVM volume and extended the root partition, but I'm wondering whether there is a more elegant way of doing this without editing the default structure. Thanks.
 
Hi guys,
I have often come to this forum to solve my problems and you have had all the answers, but in this case I have not found a solution.
I tried everything I found on the forum, but I cannot solve the problem, so I'm posting my logs; maybe you can help me.


PVE Manager version pve-manager 6.4-13

Full root partition:
Code:
df -h /

Filesystem      Size  Used Avail Use% Mounted on
/dev/md2         20G   17G  2.0G  90% /


My disks and partitions (software RAID1: /dev/md2 and /dev/md4):
Code:
fdisk -l

Disk /dev/loop0: 150 GiB, 161061273600 bytes, 314572800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: HGST HUS726040AL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier:

Device          Start        End    Sectors  Size Type
/dev/sda1        2048    1048575    1046528  511M EFI System
/dev/sda2     1048576   42989567   41940992   20G Linux RAID
/dev/sda3    42989568   59371519   16381952  7.8G Linux RAID
/dev/sda4    59371520 7814023167 7754651648  3.6T Linux RAID
/dev/sda5  7814035215 7814037134       1920  960K Linux filesystem


Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: HGST HUS726040AL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier:

Device        Start        End    Sectors  Size Type
/dev/sdb1      2048    1048575    1046528  511M EFI System
/dev/sdb2   1048576   42989567   41940992   20G Linux RAID
/dev/sdb3  42989568   59371519   16381952  7.8G Linux swap
/dev/sdb4  59371520 7814023167 7754651648  3.6T Linux RAID


Disk /dev/md2: 20 GiB, 21473722368 bytes, 41940864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md4: 3 TiB, 3221225472000 bytes, 6291456000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes




Disk /dev/mapper/pve-data: 15.5 GiB, 16672358400 bytes, 32563200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-ct: 3.6 TiB, 3949406978048 bytes, 7713685504 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Code:
lvdisplay

--- Logical volume ---
  LV Path                /dev/pve/data
  LV Name                data
  VG Name                pve
  LV UUID               
  LV Write Access        read/write
  LV Creation host, time rescue.ovh.net, 2020-04-22 13:19:17 +0200
  LV Status              available
  # open                 1
  LV Size                <15.53 GiB
  Current LE             3975
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
  
  --- Logical volume ---
  LV Path                /dev/pve/ct
  LV Name                ct
  VG Name                pve
  LV UUID               
  LV Write Access        read/write
  LV Creation host, time rescue.ovh.net, 2020-04-22 13:19:17 +0200
  LV Status              available
  # open                 1
  LV Size                3.59 TiB
  Current LE             941612
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

Code:
du -hsx /*

0       /bin
101M    /boot
1.3T    /ct
0       /dev
7.3M    /etc
4.0K    /home
0       /initrd.img
0       /lib
0       /lib32
0       /lib64
0       /libx32
16K     /lost+found
4.0K    /media
12K     /mnt
12K     /opt
du: cannot access '/proc/17226/task/17226/fd/4': No such file or directory
du: cannot access '/proc/17226/task/17226/fdinfo/4': No such file or directory
du: cannot access '/proc/17226/fd/3': No such file or directory
du: cannot access '/proc/17226/fdinfo/3': No such file or directory
du: cannot access '/proc/17227': No such file or directory
0       /proc
240K    /root
9.1M    /run
0       /sbin
8.0K    /srv
0       /sys
40K     /tmp
3.2G    /usr
433M    /var
0       /vmlinuz
0       /vmlinuz.old

Since I'm on software RAID, I am currently resizing the disk partitions (but it's a long process).

However, I have this problem on several servers, so I would rather not use this very long procedure on all of them.

Any advice is welcome. So far I have tried:

cleaning logs
cleaning caches
removing old kernels
cleaning /tmp
searching for other unneeded files

plus the classic apt commands (autoremove, clean, etc.), more or less as sketched below.
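
(A minimal sketch of the kind of commands I mean, assuming a standard Debian/Proxmox setup; the kernel package name in the last line is only an illustrative example, check dpkg -l for what is actually installed and never purge the running kernel.)

Code:
# list installed kernels first, so nothing still needed gets removed
dpkg -l 'pve-kernel-*'

# remove no-longer-needed packages, including old auto-installed kernels
apt autoremove --purge

# drop the downloaded package cache
apt clean

# shrink the systemd journal to roughly 100 MB
journalctl --vacuum-size=100M

# purge one specific old kernel (example name only, never the running kernel)
# apt purge pve-kernel-5.4.119-1-pve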

Hope someone can help me.


Thanks so much.
 
Hi! Yes, but nothing has changed :( I had already cleaned all those folders by hand :D

Thanks for the immediate response.
 
Strange.
Any chance that the /ct folder contains some stuff which was written to it before it was mounted?
I did this to myself a while ago and was desperately trying to find my space hog.
I unmounted the mountpoint and realized that /data still contained data. It was supposed to be empty...
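
A quick way to check for that without taking anything offline (just a sketch; /mnt/rootonly is an arbitrary temporary path) is to bind-mount the root filesystem somewhere else, since a plain bind mount does not carry the submounts with it:

Code:
mkdir -p /mnt/rootonly
mount --bind / /mnt/rootonly     # the root fs itself, without any submounts
du -hs /mnt/rootonly/ct          # anything showing up here lives on the root fs, hidden under the real /ct mount
umount /mnt/rootonly
rmdir /mnt/rootonly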
 
I don't think so; the /ct folder is a partition on the same disk, so I guess it's unlikely it didn't get mounted at boot.

I should say up front that I understand little about this :D
 
This is what the script returns; it doesn't find anything :(

Code:
### CLEANING UP LINUX INSTALLATION
Cleaning up >/var/log<
Cleaning up >/lib/modules<
Cleaning up configuration files from removed packages
Reading package lists... Done
Building dependency tree      
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Updating grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.4.124-1-pve
Found initrd image: /boot/initrd.img-5.4.124-1-pve
Found linux image: /boot/vmlinuz-5.4.119-1-pve
Found initrd image: /boot/initrd.img-5.4.119-1-pve
Found linux image: /boot/vmlinuz-4.19.0-16-cloud-amd64
Found initrd image: /boot/initrd.img-4.19.0-16-cloud-amd64
Adding boot menu entry for EFI firmware configuration
done
Cleaning up apt-cache via apt-get
Reading package lists... Done
Building dependency tree      
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Cleaning up >/etc/apt/cache<
Cleaned up 0 elements.
     Current state:
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2         20G   15G  3.9G  80% /

I have 10 Proxmox nodes, all configured the same, and on the others md2 is only around 5 GB; this one is driving me crazy o_O
 
Did you ever find the answer?

I am running into the same issue. I have a 128 GB SSD ZFS two-way mirror, and it is almost totally full.

Running the cleanup script did not help ;(

Code:
Filesystem        Size  Used Avail Use% Mounted on
rpool/ROOT/pve-1  113G  109G  3.7G  97% /
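
Since it is a ZFS root, I suppose snapshots or reservations could also be eating space; this is the kind of thing that can be checked on the ZFS side (a sketch, using the pool name rpool from the df output above; exact dataset names depend on the layout):

Code:
# per-dataset breakdown, including space held by snapshots and reservations
zfs list -o space -r rpool

# list any snapshots explicitly
zfs list -t snapshot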
 
Running the cleanup script did not help ;(
Did you use the "-e" option? I have reworked the script so it actually needs that to do anything; otherwise it will only show what would be done.
In case the script does not help, there is manual investigation via "du". I am using "du -h --max-depth=1".
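
Something like this keeps du on the root filesystem only, so other mounts (ZFS datasets, NFS shares, etc.) are not counted towards it (a sketch; the sort is optional but makes the big entries obvious):

Code:
du -xh --max-depth=1 / | sort -h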
 
Did you use the "-e" option? I have reworked the script so it actually needs that to do anything; otherwise it will only show what would be done.
In case the script does not help, there is manual investigation via "du". I am using "du -h --max-depth=1".
Yes, it was made quite clear in the script to apply the -e argument, but it did not help much; only around 500 MB or so was cleaned.
And the digging process also gives me no leads ;(
Code:
du -h --max-depth=1 /
108G    /LTdata
1.2T    /LTData2
512     /media
144M    /var
46M     /dev
0       /sys
2.0K    /mnt
30K     /tmp
512     /srv
512     /opt
512     /home
63K     /root
du: cannot access '/proc/10613/task/10613/fd/3': No such file or directory
du: cannot access '/proc/10613/task/10613/fdinfo/3': No such file or directory
du: cannot access '/proc/10613/fd/4': No such file or directory
du: cannot access '/proc/10613/fdinfo/4': No such file or directory
0       /proc
1.2T    /LTData
2.0K    /rpool
73M     /boot
1.1G    /usr
3.6M    /etc
1.1M    /run
2.5T    /

Any suggestions?
 
Wow, I am so dyslexic. Thank you for spotting that; that was indeed the problem.

I messed up an rsync operation once and never knew why. Now I know I made a typo.

Thanks again.
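
For anyone hitting the same thing: the rsync destination was a path that only differed in capitalisation from the real mountpoint (/LTdata vs /LTData in the du output above), so everything landed on the root filesystem instead. A simple guard I will add to my scripts (a sketch; the source path is only a placeholder):

Code:
# abort if the destination is not actually a mounted filesystem
mountpoint -q /LTData || { echo "/LTData is not mounted, aborting"; exit 1; }
rsync -a /some/source/ /LTData/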
 
