/dev/mapper/pve-root has old data on it, how to delete?

demuxer86

New Member
Apr 29, 2023
When I initially installed all of this, I didn't know what I was doing.

As of today, I barely know anything.

But anyway, I deleted the LVM-thin pool that was on the OS drive. This allowed me to expand /dev/mapper/pve-root to 110 GB, which is great.

But I still have 32 GB taking up space on that volume, and I can't figure out how to delete it. I believe it was an old VM I had on there (stupidly).

Everything else is top notch now. I even have a PBS that actually works. I mean, I took my VMs offline, restored new ones from the PBS, and they work!

Code:
root@pve:/dev/mapper# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   16G     0   16G   0% /dev
tmpfs                 3.1G  1.4M  3.1G   1% /run
/dev/mapper/pve-root  109G   32G   73G  31% /
tmpfs                  16G   49M   16G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/sdd2             511M  336K  511M   1% /boot/efi
/dev/fuse             128M   36K  128M   1% /etc/pve
tmpfs                 3.1G     0  3.1G   0% /run/user/0

How do I get rid of that 32G without hosing anything?
 
You need to find where these 32 GB live.
This can be done via
du -h --max-depth=1
in the root folder. This will print the usage of each folder. cd into the largest one and repeat the process.
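For example, to rank the top-level directories in one go (a small sketch; assumes the GNU du and sort that Proxmox ships):

Code:
# -x stays on the root filesystem; sort -h orders human-readable sizes
du -h --max-depth=1 -x / 2>/dev/null | sort -h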
You can also try using the cleanup script I have created. You can find it here:
https://forum.proxmox.com/posts/294457/
 
Try ncdu; it will go over all folders and add up their sizes. -x limits it to one filesystem, so separate filesystems like /dev, /proc and so on are ignored.
Code:
apt install ncdu
ncdu -x /
 
I had tried that, but got an error while trying to install ncdu. Something about not finding the package? I have also run apt-get update.

I'm not home at the moment but I can post logs later.
 
Code:
root@pve:~# apt install ncdu
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  ncdu
0 upgraded, 1 newly installed, 0 to remove and 29 not upgraded.
Need to get 46.9 kB of archives.
After this operation, 111 kB of additional disk space will be used.
Err:1 https://ftp.us.debian.org/debian bullseye/main amd64 ncdu amd64 1.15.1-1
  Reading from proxy failed - read (115: Operation now in progress) [IP: 71.13.17.10 443]
E: Failed to fetch https://ftp.us.debian.org/debian/pool/main/n/ncdu/ncdu_1.15.1-1_amd64.deb  Reading from proxy failed - read (115: Operation now in progress) [IP: 71.13.17.10 443]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
root@pve:~#
 
Workaround: this lists the biggest subdirectories and files of the directory you are in.

Code:
alias ducks="du -cks * 2>/dev/null | sort -rn | head"
cd /usr
ubu@pop-os:/usr$ ducks
7116600 total
4297684 lib
2268168 share
267008 bin
154812 src
44632 sbin
35600 include
28752 local
19928 libexec
4 libx32


For your apt problem, are you using a proxy? See: Reading from proxy failed - read (115: Operation now in pr...

Post the output of:
grep -ris proxy /etc/apt*
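If a proxy is configured, the grep should turn up an Acquire::http::Proxy (or https) line; fixing or removing that entry should let apt fetch directly again. A hypothetical example of what a match could look like (the filename and address here are made up):

Code:
# hypothetical match -- your file name and proxy address will differ
/etc/apt/apt.conf.d/99proxy:Acquire::https::Proxy "http://10.0.0.1:3128/";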
 
Search for "debian 11 ncdu". wget the link. Then "dpkg -i ./ncdu-version.deb".
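For example, reusing the URL from the apt error above (verify the current bullseye version on packages.debian.org first):

Code:
wget https://ftp.us.debian.org/debian/pool/main/n/ncdu/ncdu_1.15.1-1_amd64.deb
dpkg -i ./ncdu_1.15.1-1_amd64.deb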

Could the 32GB be swap?
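A quick way to check that:

Code:
# show active swap devices and overall memory/swap usage
swapon --show
free -h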
 
I would check the "/var/log", "/var/tmp", "/var/lib/vz", "/mnt" folders and their subfolders.
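A one-liner to size exactly those (paths as found on a stock PVE install):

Code:
du -sh /var/log /var/tmp /var/lib/vz /mnt 2>/dev/null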
 
Search for "debian 11 ncdu". wget the link. Then "dpkg -i ./ncdu-version.deb".

Could the 32GB be swap?
This worked. Thank you!

But I don't think it's looking at the right drive... maybe? I don't have time now, but will try harder later.

Code:
   27.0 GiB [##########] /proxmoxbackups
    2.3 GiB [          ] /var
    2.2 GiB [          ] /usr
   90.6 MiB [          ] /boot
    4.8 MiB [          ] /etc
  156.0 KiB [          ] /root
   40.0 KiB [          ] /tmp
e  16.0 KiB [          ] /lost+found
e   4.0 KiB [          ] /srv
e   4.0 KiB [          ] /opt
e   4.0 KiB [          ] /mnt
e   4.0 KiB [          ] /media1
e   4.0 KiB [          ] /media
e   4.0 KiB [          ] /home
e   4.0 KiB [          ] /cloud2
e   4.0 KiB [          ] /cloud1
@   0.0   B [          ]  libx32
@   0.0   B [          ]  lib64
@   0.0   B [          ]  lib32
@   0.0   B [          ]  sbin
@   0.0   B [          ]  lib
@   0.0   B [          ]  bin
>   0.0   B [          ] /sys
>   0.0   B [          ] /run
>   0.0   B [          ] /proc
>   0.0   B [          ] /dev
 
Wait, actually, I think it is... it looks like I tried making backups on the root drive. This must have been a while ago; like it says, last December. I think these are safe to delete (see the sketch after the listing). I have a proper PBS setup now and it has all my current backups on there.

Code:
    0.0   B [  0.0%]  vzdump-qemu-102-2022_12_19-02_18_34.vma.zst.protected
    4.0 KiB [  0.0%]  vzdump-qemu-105-2023_01_01-10_04_35.vma.zst.notes
    4.0 KiB [  0.0%]  vzdump-qemu-100-2022_12_11-01_00_01.vma.zst.notes
    4.0 KiB [  0.0%]  vzdump-qemu-100-2023_01_01-01_00_01.vma.zst.notes
    4.0 KiB [  0.0%]  vzdump-qemu-102-2022_12_19-02_18_34.vma.zst.notes
    4.0 KiB [  0.0%]  vzdump-qemu-101-2022_12_09-18_02_29.log
    4.0 KiB [  0.0%]  vzdump-qemu-101-2022_12_09-18_13_36.log
    4.0 KiB [  0.0%]  vzdump-qemu-101-2022_12_09-16_29_52.log
    4.0 KiB [  0.0%]  vzdump-qemu-100-2022_12_11-01_00_01.log
    4.0 KiB [  0.0%]  vzdump-qemu-102-2022_12_19-02_18_34.log
    4.0 KiB [  0.0%]  vzdump-qemu-105-2022_12_25-16_03_40.log
    4.0 KiB [  0.0%]  vzdump-qemu-100-2023_01_01-01_00_01.log
    4.0 KiB [  0.0%]  vzdump-qemu-102-2022_12_25-08_33_39.log
    4.0 KiB [  0.0%]  vzdump-qemu-103-2022_12_25-08_35_08.log
    4.0 KiB [  0.0%]  vzdump-qemu-104-2022_12_25-08_36_44.log
    4.0 KiB [  0.0%]  vzdump-qemu-101-2022_12_25-01_00_36.log
    8.0 KiB [  0.0%]  vzdump-qemu-101-2022_12_11-01_00_29.log
    8.0 KiB [  0.0%]  vzdump-qemu-105-2023_01_01-10_04_35.log
    8.0 KiB [  0.0%]  vzdump-qemu-104-2023_01_01-01_04_57.log
    8.0 KiB [  0.0%]  vzdump-qemu-104-2022_12_19-02_21_10.log
    8.0 KiB [  0.0%]  vzdump-qemu-102-2023_01_01-01_00_53.log
   12.0 KiB [  0.0%]  vzdump-qemu-101-2022_12_18-01_00_35.log
    2.2 GiB [  8.1%]  vzdump-qemu-100-2022_12_11-01_00_01.vma.zst
    2.3 GiB [  8.4%]  vzdump-qemu-100-2023_01_01-01_00_01.vma.zst
    3.3 GiB [ 12.4%]  vzdump-qemu-102-2022_12_19-02_18_34.vma.zst
   19.2 GiB [ 71.1%]  vzdump-qemu-105-2023_01_01-10_04_35.vma.zst
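For reference, something like this should clear them out (the path comes from the ncdu output above; listing first to double-check is a good idea):

Code:
# confirm these really are the stale vzdump archives
ls -lh /proxmoxbackups
# then remove the old archives, logs and notes
rm /proxmoxbackups/vzdump-qemu-*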
 
Thank you everyone. I deleted everything in that dump folder, and now have a clean PVE drive, without any issues.

Just wanted to update to end the thread. Again, thanks everyone.
 
Update,

Now my main storage NVMe has too many old files. It's a 1 TB drive and has nowhere near that allocated to it, but apparently it shows as full.

How do I find what mapping to use to reach it through ncdu -x? Every path I try, it says it's not a directory.
 
Show the output of lsblk and df -h.
You use LVM, so also vgs and lvs.
 
Code:
nvme0n1                           259:0    0 953.9G  0 disk
├─main1tbintel-vm--107--disk--0   253:1    0    32G  0 lvm
├─main1tbintel-vm--109--disk--0   253:2    0    32G  0 lvm
├─main1tbintel-vm--101--disk--0   253:3    0    32G  0 lvm
├─main1tbintel-vm--108--disk--0   253:4    0    96G  0 lvm
├─main1tbintel-vm--108--disk--1   253:5    0    32G  0 lvm
├─main1tbintel-vm--100--disk--0   253:6    0    96G  0 lvm
├─main1tbintel-vm--100--disk--1   253:7    0    96G  0 lvm
├─main1tbintel-vm--102--disk--2   253:8    0    44G  0 lvm
├─main1tbintel-vm--104--disk--1   253:9    0    65G  0 lvm
├─main1tbintel-vm--104--disk--0   253:10   0     4M  0 lvm
└─main1tbintel-vm--104--disk--2   253:11   0     4M  0 lvm
 
Code:
root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   32G     0   32G   0% /dev
tmpfs                 6.3G  1.4M  6.3G   1% /run
/dev/mapper/pve-root  109G   15G   89G  15% /
tmpfs                  32G   46M   32G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/nvme1n1p2        511M  336K  511M   1% /boot/efi
/dev/fuse             128M   40K  128M   1% /etc/pve
tmpfs                 6.3G     0  6.3G   0% /run/user/0
 
Code:
root@pve:~# vgs
  VG           #PV #LV #SN Attr   VSize    VFree   
  main1tbintel   1  13   0 wz--n- <953.87g <300.86g
  pve            1   2   0 wz--n- <118.74g       0
root@pve:~#

But vgs says 300 GB free.
 
So you have two volume groups, pve and main1tbintel. Your root LV is on pve, and there is no unallocated (free) space left in the pve volume group.
So your root LV is about 110 GB, and it holds about 15 GB of data (15% full).

15 GB seems about right for a Proxmox install with logs, templates and ISO images.

ncdu -x / will show you the details.

Maybe you can delete some old logfiles.

pvdisplay will show you the physical volumes,
vgdisplay the volume groups, and
lvdisplay the logical volumes.

The main1tbintel VG contains your virtual disks. That is also why you can't browse it with ncdu: the LVs on it are raw VM disks, not mounted filesystems.
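For a compact per-LV view of how main1tbintel's space is allocated (standard lvm2 commands, nothing Proxmox-specific):

Code:
# list all LVs in the VG, plus the physical devices backing them
lvs -a -o +devices main1tbintel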
 
