Out of space on /dev/mapper/pve-root

psyko_chewbacca

New Member
Jun 22, 2023
Hi,

I can't figure out why (I'm not a Proxmox pro), but I'm out of disk space on my host SSD.
All my VMs are stored on a secondary SSD, so basically all of the space on that boot SSD (500 GB) should be available.

Code:
root@pve-server:/# df -h
Filesystem                  Size  Used Avail Use% Mounted on
udev                         16G     0   16G   0% /dev
tmpfs                       3.2G  1.2M  3.2G   1% /run
/dev/mapper/pve-root         94G   90G     0 100% /
tmpfs                        16G   60M   16G   1% /dev/shm
tmpfs                       5.0M     0  5.0M   0% /run/lock
/dev/nvme0n1p2             1022M  344K 1022M   1% /boot/efi
/dev/fuse                   128M   40K  128M   1% /etc/pve
192.168.10.254:/4TB1        3.6T  2.8T  678G  81% /mnt/pve/OMV_4TB1
192.168.10.254:/Parity1     3.7T  2.1T  1.5T  59% /mnt/pve/OMV_Parity1
192.168.10.254:/mergerfs01   76T   35T   38T  48% /mnt/pve/OMV_mergerfs01
192.168.10.254:/Parity2     3.6T  132G  3.3T   4% /mnt/pve/OMV_Parity2
tmpfs                       3.2G     0  3.2G   0% /run/user/0

So 100% usage on /dev/mapper/pve-root.

Code:
root@pve-server:/# du -hsx
7.5G    .
Yet the reported used space is modest...
/var/log isn't that big either.

Finally, lsblk reports a VM disk on that SSD that I'm not sure is in use... (EDIT: It looks like an EFI partition for a VM; is there any way to move it to the secondary SSD?)
Code:
root@pve-server:/# lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme0n1                       259:0    0 465.8G  0 disk
├─nvme0n1p1                   259:1    0  1007K  0 part
├─nvme0n1p2                   259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                   259:3    0 464.8G  0 part
  ├─pve-swap                  253:2    0    32G  0 lvm  [SWAP]
  ├─pve-root                  253:3    0    96G  0 lvm  /
  ├─pve-data_tmeta            253:4    0   3.2G  0 lvm
  │ └─pve-data-tpool          253:6    0 314.3G  0 lvm
  │   ├─pve-data              253:7    0 314.3G  1 lvm
  │   └─pve-vm--200--disk--2  253:8    0     4M  0 lvm
  └─pve-data_tdata            253:5    0 314.3G  0 lvm
    └─pve-data-tpool          253:6    0 314.3G  0 lvm
      ├─pve-data              253:7    0 314.3G  1 lvm
      └─pve-vm--200--disk--2  253:8    0     4M  0 lvm
nvme1n1                       259:4    0 931.5G  0 disk
├─ssd1tb-ssd1tb_tmeta         253:0    0   9.3G  0 lvm
│ └─ssd1tb-ssd1tb-tpool       253:9    0 912.8G  0 lvm
│   ├─ssd1tb-ssd1tb           253:10   0 912.8G  1 lvm
│   ├─ssd1tb-vm--201--disk--0 253:11   0   100G  0 lvm
│   ├─ssd1tb-vm--200--disk--0 253:12   0   128G  0 lvm
│   ├─ssd1tb-vm--205--disk--0 253:13   0   100G  0 lvm
│   ├─ssd1tb-vm--204--disk--0 253:14   0    20G  0 lvm
│   ├─ssd1tb-vm--203--disk--0 253:15   0    15G  0 lvm
│   ├─ssd1tb-vm--202--disk--0 253:16   0     4G  0 lvm
│   ├─ssd1tb-vm--210--disk--0 253:17   0    20G  0 lvm
│   ├─ssd1tb-vm--207--disk--0 253:18   0     3G  0 lvm
│   ├─ssd1tb-vm--208--disk--0 253:19   0    50G  0 lvm
│   └─ssd1tb-vm--206--disk--0 253:20   0    16G  0 lvm
└─ssd1tb-ssd1tb_tdata         253:1    0 912.8G  0 lvm
  └─ssd1tb-ssd1tb-tpool       253:9    0 912.8G  0 lvm
    ├─ssd1tb-ssd1tb           253:10   0 912.8G  1 lvm
    ├─ssd1tb-vm--201--disk--0 253:11   0   100G  0 lvm
    ├─ssd1tb-vm--200--disk--0 253:12   0   128G  0 lvm
    ├─ssd1tb-vm--205--disk--0 253:13   0   100G  0 lvm
    ├─ssd1tb-vm--204--disk--0 253:14   0    20G  0 lvm
    ├─ssd1tb-vm--203--disk--0 253:15   0    15G  0 lvm
    ├─ssd1tb-vm--202--disk--0 253:16   0     4G  0 lvm
    ├─ssd1tb-vm--210--disk--0 253:17   0    20G  0 lvm
    ├─ssd1tb-vm--207--disk--0 253:18   0     3G  0 lvm
    ├─ssd1tb-vm--208--disk--0 253:19   0    50G  0 lvm
    └─ssd1tb-vm--206--disk--0 253:20   0    16G  0 lvm


pveversion for good measure:

Code:
root@pve-server:/# pveversion
pve-manager/8.0.4/d258a813cfa6b390 (running kernel: 6.2.16-14-pve)


So why am I running out of space when there are seemingly 94 GB allocated to the root partition while only 7.5 GB is in use?


Is there a way to fix this without reinstalling Proxmox? I'd like to avoid downtime as much as possible.

Thanks
 
Maybe some of the large files are hidden underneath a mount point, which makes them invisible to du.
You can temporarily bind-mount / somewhere else and run du over there again.
Code:
mkdir -p /mnt/root
mount -o bind / /mnt/root
du -sh /mnt/root
This might show you a size much closer to the 90 GB reported by df.
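If the numbers do differ, you can drill down inside the bind mount to see which directory is holding the hidden data; for example (just a sketch, adjust the depth to taste):
Code:
# list the largest directories up to two levels below the bind-mounted root;
# data written into an unmounted NFS mount point would show up here
# (e.g. under /mnt/root/mnt/pve/...)
du -xh --max-depth=2 /mnt/root | sort -h | tail -n 20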

Another reason for a filesystem that appears full could be inode exhaustion (a very large number of small files). To check that, look at the IFree column in the output of df -i.
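For example (the interesting part is whether IFree is at or near 0):
Code:
# show inode usage instead of block usage for the root filesystem
df -i /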
 
Finally, lsblk reports a VM disk on that SSD that I'm not sure is in use... (EDIT: It looks like an EFI partition for a VM; is there any way to move it to the secondary SSD?)
It is an EFI disk, which stores settings like the boot order; it is not an EFI partition. Anyway, to move the disk to a different storage, select the VM that the EFI disk belongs to in the web interface. Then, under Hardware, select the EFI Disk and at the top click "Disk Action" --> "Move Storage".
Alternatively, this can be done from the CLI as well:
Code:
qm move-disk <vmid> efidisk0 [<storage>]
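For example, to move the EFI disk of VM 200 (the 4 MB pve-vm--200--disk--2 volume in the lsblk output above) to the second SSD, assuming its thin pool is configured in Proxmox as a storage with the ID "ssd1tb" (check your actual storage ID under Datacenter --> Storage):
Code:
# move VM 200's EFI disk to the storage "ssd1tb" (adjust the storage ID)
qm move-disk 200 efidisk0 ssd1tb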
 
Thanks, and thanks @fschauer! MVP right here.
You were spot on about the issue.

I did have some issues with stale handles on mounted NFS shares in the recent past. I guess some VMs continued writing to them and the data just got dumped onto the SSD.
Is there a way to prevent this type of issue in the future? Detect a stale NFS handle and try to remount it, or at least prevent writing?
These are NFSv4 shares coming from a VM that gets launched first.
 
In case you mount the NFS share outside of PVE (fstab/autofs/systemd/...) and then use it in PVE as a directory storage, you should set the "is_mountpoint" option. That way, writes to that directory storage will fail when the NFS share isn't mounted, instead of filling up your root filesystem. Most people don't do this because it needs to be done from the CLI.
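A minimal sketch of what that looks like, assuming a directory storage with the ID "mydir" backed by an external mount at /mnt/mydir (substitute your actual storage ID and path):
Code:
# require the storage path to be a real mount point before PVE writes to it
pvesm set mydir --is_mountpoint yes
# or, equivalently, spell out the expected mount path
pvesm set mydir --is_mountpoint /mnt/mydir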
 
