local disk - what's using up so much space? MKII

Fidelita

New Member
Jul 22, 2025
I am a bit lost as to how I originally configured all of this; I'm still relatively new to Proxmox. I deleted a container that had a large footprint, but I don't seem to have gained any space back.

Results of cat /etc/pve/storage.cfg

Code:
root@Plex:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

lvm: VM-Storage
        vgname VM-Storage
        content rootdir,images
        nodes Plex,Plex-2
        saferemove 0
        shared 0

zfspool: ZFS-1
        pool ZFS-1
        content rootdir,images
        mountpoint /ZFS-1
        nodes Plex

Results of lsblk

Code:
root@Plex:~# lsblk
NAME                           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                              8:0    0 111.8G  0 disk
├─VM--Storage-vm--101--disk--0 252:2    0    75G  0 lvm
├─VM--Storage-vm--102--disk--0 252:3    0     4G  0 lvm
├─VM--Storage-vm--105--disk--0 252:4    0    10G  0 lvm
├─VM--Storage-vm--111--disk--0 252:5    0     4G  0 lvm
├─VM--Storage-vm--113--disk--0 252:6    0     4G  0 lvm
├─VM--Storage-vm--118--disk--0 252:7    0     4G  0 lvm
├─VM--Storage-vm--119--disk--0 252:8    0     2G  0 lvm
└─VM--Storage-vm--120--disk--0 252:9    0     4G  0 lvm
sdb                              8:16   0 111.8G  0 disk
├─sdb1                           8:17   0  1007K  0 part
├─sdb2                           8:18   0     1G  0 part /boot/efi
└─sdb3                           8:19   0 110.8G  0 part
  ├─pve-swap                   252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                   252:1    0  37.7G  0 lvm  /
  ├─pve-data_tmeta             252:10   0     1G  0 lvm
  │ └─pve-data-tpool           252:12   0  49.3G  0 lvm
  │   ├─pve-data               252:13   0  49.3G  1 lvm
  │   ├─pve-vm--100--disk--0   252:14   0    45G  0 lvm
  │   └─pve-vm--300--disk--0   252:15   0     8G  0 lvm
  └─pve-data_tdata             252:11   0  49.3G  0 lvm
    └─pve-data-tpool           252:12   0  49.3G  0 lvm
      ├─pve-data               252:13   0  49.3G  1 lvm
      ├─pve-vm--100--disk--0   252:14   0    45G  0 lvm
      └─pve-vm--300--disk--0   252:15   0     8G  0 lvm
sdc                              8:32   0 476.9G  0 disk
├─sdc1                           8:33   0 476.9G  0 part
└─sdc9                           8:41   0     8M  0 part
zd0                            230:0    0    32G  0 disk
├─zd0p1                        230:1    0    31G  0 part
├─zd0p2                        230:2    0     1K  0 part
└─zd0p5                        230:5    0   975M  0 part

Results of df -h

Code:
root@Plex:~# df -h
Filesystem               Size  Used Avail Use% Mounted on
udev                      16G     0   16G   0% /dev
tmpfs                    3.2G  2.1M  3.2G   1% /run
/dev/mapper/pve-root      37G   19G   17G  52% /
tmpfs                     16G   72M   16G   1% /dev/shm
tmpfs                    5.0M     0  5.0M   0% /run/lock
/dev/sdb2               1022M   12M 1011M   2% /boot/efi
ZFS-1                    226G  128K  226G   1% /ZFS-1
ZFS-1/subvol-103-disk-0  4.0G  1.5G  2.6G  36% /ZFS-1/subvol-103-disk-0
ZFS-1/subvol-110-disk-0   41G   28G   14G  69% /ZFS-1/subvol-110-disk-0
ZFS-1/subvol-104-disk-0  5.0G  1.6G  3.5G  32% /ZFS-1/subvol-104-disk-0
ZFS-1/subvol-103-disk-1  247G  164G   84G  67% /ZFS-1/subvol-103-disk-1
ZFS-1/subvol-100-disk-0  235G  9.2G  226G   4% /ZFS-1/subvol-100-disk-0
/dev/fuse                128M   68K  128M   1% /etc/pve
//192.168.1.17/4TB       3.6T  2.8T  806G  79% /mnt/lxc_shares/Plex/4TB
//192.168.1.17/4TB-2     3.6T  3.1T  575G  85% /mnt/lxc_shares/Plex/4TB-2
//192.168.1.17/4TB-3     3.6T  2.8T  850G  77% /mnt/lxc_shares/Plex/4TB-3
//192.168.1.17/4TB-4     3.6T  2.6T  1.1T  73% /mnt/lxc_shares/Plex/4TB-4
//192.168.1.17/4TB-5     3.6T  2.6T 1008G  73% /mnt/lxc_shares/Plex/4TB-5
tmpfs                    3.2G     0  3.2G   0% /run/user/0
 
Please edit your post to use code blocks. Also share lvs -a. To find out what is using the space on local, try this:
Bash:
apt install gdu
gdu /
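If it's the local directory storage you're worried about, you can also point gdu straight at its path, which is /var/lib/vz according to your storage.cfg (vzdump backups usually end up under its dump subdirectory):
Bash:
gdu /var/lib/vz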
 
Results of lvs -a

Code:
root@Plex:~# lvs -a
  LV              VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-101-disk-0   VM-Storage -wi-ao----  75.00g                                                  
  vm-102-disk-0   VM-Storage -wi-ao----   4.00g                                                  
  vm-105-disk-0   VM-Storage -wi-ao----  10.00g                                                  
  vm-111-disk-0   VM-Storage -wi-a-----   4.00g                                                  
  vm-113-disk-0   VM-Storage -wi-ao----   4.00g                                                  
  vm-118-disk-0   VM-Storage -wi-a-----   4.00g                                                  
  vm-119-disk-0   VM-Storage -wi-a-----   2.00g                                                  
  vm-120-disk-0   VM-Storage -wi-a-----   4.00g                                                  
  data            pve        twi-aotz-- <49.34g             60.90  2.71                          
  [data_tdata]    pve        Twi-ao---- <49.34g                                                  
  [data_tmeta]    pve        ewi-ao----   1.00g                                                  
  [lvol0_pmspare] pve        ewi-------   1.00g                                                  
  root            pve        -wi-ao---- <37.70g                                                  
  swap            pve        -wi-ao----   8.00g                                                  
  vm-100-disk-0   pve        Vwi-aotz--  45.00g data        62.77                                
  vm-300-disk-0   pve        Vwi-a-tz--   8.00g data        22.52
 
Did you find anything interesting with gdu? Can you tell me which storage your CT used, where you check the free space, and what you see there?
Since the formatting in the first post is still pretty bad, maybe you can share all of these again too, just so I have all the context I need:
Bash:
df -hT
lsblk -o+FSTYPE
cat /etc/pve/storage.cfg
zfs list -rt all -o name,used,avail,refer,mountpoint,refquota,refreservation
local-lvm should have about 40% free space. Make sure discard is set up properly for thin-provisioned storage like that.
VM-Storage seems to use plain LVM rather than LVM-Thin. I'd move the disks on it somewhere else and re-create that storage if possible.
There's also what looks to be a ZFS storage, but I can't tell what you use it for or how.
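For the discard part, a rough sketch of what that could look like (assuming a reasonably current PVE; the IDs, disk names and target storage below are only examples taken from your output, so check qm config / pct config first):
Bash:
# let the guest pass discards down to the thin pool
# (scsi0 and the volume name are examples, see "qm config 100")
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
# then run "fstrim -a" inside the guest, or enable fstrim.timer there

# containers on thin-provisioned storage can be trimmed from the host
pct fstrim <ctid>

# moving a disk off the plain-LVM VM-Storage, e.g. to local-lvm
qm move-disk 101 scsi0 local-lvm --delete 1
pct move-volume <ctid> rootfs local-lvm --delete 1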
 
If you want to drill down into the filesystem to find the biggest files and consumers, you can also call
Code:
du -h -d 1 <path>
recursively, descending into whichever directory turns out to be the largest.

du works like df, but for files and directories instead of whole partitions.
Code:
-h
activates human-readable size units,
Code:
-d 1
limits the output to the requested path at a depth of 1.
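For example, drilling down from /var (the paths here are just an illustration; start wherever df or gdu points you):
Code:
du -h -d 1 /var
du -h -d 1 /var/lib
du -h -d 1 /var/lib/vz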