/dev/mapper/pve-root disk usage 100%

natharas

New Member
Sep 4, 2022
Hi, I'm having some issues with /dev/mapper/pve-root and have found that it is at 100% usage. I've checked the local storage: it contains no ISOs or templates, only a few backups of less than 1 MB each (ISOs are saved to an NFS share). Below is the output of df -h:
Filesystem                     Size  Used Avail Use% Mounted on
udev                            46G     0   46G   0% /dev
tmpfs                          9.1G  1.6M  9.1G   1% /run
/dev/mapper/pve-root           6.8G  6.5G     0 100% /
tmpfs                           46G   52M   46G   1% /dev/shm
tmpfs                          5.0M     0  5.0M   0% /run/lock
/dev/sdi2                      511M  340K  511M   1% /boot/efi
/dev/fuse                      128M   20K  128M   1% /etc/pve
192.168.1.153:/mnt/vmpool/VM   539G   17G  522G   4% /mnt/pve/VMStorage
//192.168.1.153/VM             539G   17G  522G   4% /mnt/pve/VM
//192.168.1.153/Media          7.0T  256K  7.0T   1% /mnt/pve/media
tmpfs                          9.1G     0  9.1G   0% /run/user/0

From here, how do I determine what is causing the usage to be 100%?
 
A quick check can be done via du -sh *, which will give you the size of every directory. From there you can investigate further.
There are of course tools for that (baobab, for example), but installing anything might be a problem for you in this situation with a full root filesystem.
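Something like this should work as a rough starting point (run as root; the -x keeps du on the root filesystem, so your NFS/CIFS mounts are skipped):

du -xh -d1 / 2>/dev/null | sort -h     # size of each top-level directory, biggest last
du -xh -d1 /var 2>/dev/null | sort -h  # then drill into the biggest one, e.g. /var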
 
6.8 GB is also very little space. I would use 16 GB, or better 32 GB. Keep in mind that logs grow over time and can eat up some GBs of storage.
And sometimes uploading ISOs fails, and failed uploads will remain in /var/tmp. Manually created directory storages pointing to external disks or network shares also often cause the root filesystem to fill up if the mount fails and you forgot to enable "is_mountpoint" for that storage.
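To check for such leftovers, ls -lh /var/tmp/ will show them. And a manually created directory storage entry in /etc/pve/storage.cfg would look roughly like this (storage name, path and content types are only examples, check the storage documentation for the exact syntax):

dir: VMStorage
        path /mnt/pve/VMStorage
        content iso,backup
        is_mountpoint yes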
 
Thanks guys, I was able to work it out: it was a leftover pveupload file, and once I deleted that I was okay.

I'm going to grab a 256 GB SSD. What is the best process to migrate from the USB drive without rebuilding the environment?
 
You shouldn't run PVE from a USB pen drive or SD card. It writes too much and might kill it very fast. And don't buy a crappy SSD; with SSDs you get what you pay for. Enterprise SSDs are recommended, but when choosing a consumer SSD at least don't get one with QLC NAND.

And there is no easy way to migrate it, as PVE is not an appliance but a full-fledged Linux OS with all its complexity and flexibility. The more you optimize/personalize your PVE, the harder it gets to migrate or set it up again. Host backups are on the roadmap, but who knows how long that will take.

Some options:
1.) Clone your old partitions from your old disk to the new one. Manually edit your partition table, extend your partitions + PV + VG + LVs and grow your filesystems (rough sketch below).
2.) Backup all guests and the /etc folder. Install PVE from scratch, compare all config files in /etc/pve on the new installation line by line with the config files of the backup and edit the lines that you want to keep from your old installation.
3.) Backup all guests, the /etc folder and /var/lib/pve-cluster/config.db. Install PVE from scratch, then copy over the /var/lib/pve-cluster/config.db from the backup as described here: https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)#_recovery (see the second sketch below).
You will most likely also have to edit some other config files like /etc/network/interfaces and so on.
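For option 1, the grow steps after cloning would look roughly like this, assuming the new disk shows up as /dev/sdb, the LVM partition is partition 3 and root is ext4 on pve/root (adjust all of that to your actual layout before running anything):

sgdisk -e /dev/sdb                 # GPT only: move the backup header to the end of the larger disk
parted /dev/sdb resizepart 3 100%  # grow the LVM partition into the new free space
pvresize /dev/sdb3                 # grow the PV
lvextend -L +20G /dev/pve/root     # grow the root LV (keep some space free for pve/data etc.)
resize2fs /dev/mapper/pve-root     # grow the ext4 filesystem to match

For option 3, the restore step from the linked article boils down to something like this on the fresh installation (the backup path is only an example, follow the wiki for the details):

systemctl stop pve-cluster pvedaemon pveproxy pvestatd    # stop pmxcfs and the services using it
cp /root/backup/config.db /var/lib/pve-cluster/config.db  # put the saved database back in place
systemctl start pve-cluster pvedaemon pveproxy pvestatd   # start everything again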


So best is to do it right in the first place and then back the whole system disk up (for example using Clonezilla) on a regular basis. Then you never have to set it up again, as you can restore it from backup in case the disk or an upgrade fails.
 
Sounds like option 3 will be best for me. I've got 3 VMs on my current Proxmox environment and they use an NFS share for their storage. So I'd be able to do a clean install of Proxmox, copy across the files you've listed to the new instance and connect the NFS share. Then I'd set up the network interfaces as well and it should be okay.
 
