local disk - what's using up so much space?

NE78

Member
Aug 2, 2022
I have a local disk, /dev/sda: a 500GB SSD that PVE is installed on. I only use it to store ISOs and container templates.
I also have /dev/nvme0n1, a 1TB NVMe that I use to store all of my VMs and backups.

Proxmox states that I'm using 250GB of storage on my local disk, but I only have about 22GB of ISOs stored there.

I ran ncdu / and am not sure what I'm looking at. Does this combine both disks?
Bash:
--- / -------------------------------------------------------------------------------------------
  211.4 GiB [##########] /vm                                                                    
   76.0 GiB [###       ] /mnt
.  12.0 GiB [          ] /var
    3.6 GiB [          ] /usr
  402.2 MiB [          ] /boot
   45.7 MiB [          ] /dev
    4.7 MiB [          ] /etc
    1.1 MiB [          ] /run
   76.0 KiB [          ] /root
   40.0 KiB [          ] /tmp
e  16.0 KiB [          ] /lost+found
   16.0 KiB [          ] /data
    8.0 KiB [          ] /home
e   4.0 KiB [          ] /srv
e   4.0 KiB [          ] /opt
e   4.0 KiB [          ] /media
.   0.0   B [          ] /proc
    0.0   B [          ] /sys
@   0.0   B [          ]  libx32
@   0.0   B [          ]  lib64
@   0.0   B [          ]  lib32
@   0.0   B [          ]  sbin
@   0.0   B [          ]  lib
@   0.0   B [          ]  bin
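
If I understand ncdu right, it crosses into other mounted filesystems by default, so the 76 GiB under /mnt above is presumably the NVMe mount rather than the SSD. The -x flag keeps ncdu on the starting filesystem, which should isolate the SSD:
Bash:
# scan only the filesystem that / lives on; mounts like /mnt/pve/data are skipped
ncdu -x /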


Bash:
--- /mnt/pve/data/images ------------------------------------------------------------------------
/..
32.0 GiB [##########] /106                                                                  
10.8 GiB [###       ] /107
9.2 GiB [##        ] /108
7.7 GiB [##        ] /105
5.4 GiB [#         ] /100
4.4 GiB [#         ] /101
3.0 GiB [          ] /102
1.9 GiB [          ] /103
1.6 GiB [          ] /104
Those don't even exist anymore. I suppose I could clean that up, but what disk is that?
 
Hi,

Those don't even exist anymore. I suppose I could clean that up, but what disk is that?
this seems to be the mount point for your storage (note the "/mnt/" in the path). In the GUI, navigate to the storage, then select "VM Disks" from the sidebar. You should get an overview of all disks there (even unused ones). You can then remove the ones you don't need anymore with the button at the top.
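
Alternatively, the same overview is available on the CLI; something like this should work (substitute your storage's name):
Bash:
# list the disk images registered on the storage named "storage"
pvesm list storage --content images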
 
Under Storage I have:
--local (pve)
--pbs (pve) -- this is my PBS system
--storage (pve) -- this is the NVMe

local is only configured to store ISOs, so there are no VM disks listed. I did delete the few old VMs from back when I allowed installs to go on that drive.

storage does have all of my qcow2 VM disks. That still doesn't explain where all of that space is being used up, though.

Bash:
--- /vm/dump ------------------------------------------------------------------------------------------------------------
                         /..
   29.4 GiB [##########]  vzdump-qemu-100-2022_10_30-01_00_01.vma.zst                                                   
   28.8 GiB [######### ]  vzdump-qemu-100-2022_11_03-16_06_45.vma.zst
   21.0 GiB [#######   ]  vzdump-qemu-128-2022_10_30-01_06_48.vma.zst
    5.3 GiB [#         ]  vzdump-qemu-129-2022_10_30-01_10_24.vma.zst
    3.2 GiB [#         ]  vzdump-qemu-126-2022_10_30-01_05_37.vma.zst
  766.4 MiB [          ]  vzdump-qemu-125-2022_10_30-01_05_05.vma.zst
  315.9 MiB [          ]  vzdump-qemu-123-2022_10_30-01_04_55.vma.zst
  223.3 MiB [          ]  vzdump-qemu-121-2022_10_30-01_04_42.vma.zst
  218.1 MiB [          ]  vzdump-qemu-122-2022_10_30-01_04_47.vma.zst
It looks like these are some of my backups, but those are only supposed to be on the "storage" disk. Is that what /vm/dump is?

I looked in /vm/images/ and VM 100 is showing 100GB of disk space. It's not actually taking up that much; that's just the size I originally provisioned it with. I think that's where some of this "used" space reading is coming from. Some of the VMs I gave larger disks to leave room for expansion. Am I on the right track?
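
One way to check that theory: qemu-img reports both the provisioned capacity and the space an image actually occupies. The path below is just my guess at where the disk file lives:
Bash:
# "virtual size" is the provisioned capacity; "disk size" is the actual on-disk usage
qemu-img info /vm/images/100/vm-100-disk-0.qcow2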
 
I think it might be easier to help you out here if you could post your storage config. Please post the contents of "/etc/pve/storage.cfg". Thanks!
 
Bash:
root@pve:~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content iso,vztmpl
    shared 0

dir: storage
    path /vm
    content backup,snippets,images,rootdir
    prune-backups keep-all=1
    shared 0

pbs: pbs
    datastore store10t
    server 10.20.20.5
    content backup
    fingerprint xxxx
    prune-backups keep-all=1
    username root@pam
OK, so /vm is the "storage" storage; that answers that. But why does the GUI show that local is 55% full when ncdu shows only about 12 GiB for /var?
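
My guess is that a directory storage simply reports the usage of whatever filesystem it sits on, so "local" (at /var/lib/vz) would be reporting the whole root LV, including everything under /vm. A quick way to test that assumption:
Bash:
# both paths sit on /dev/mapper/pve-root, so they should report identical Used/Avail figures
df -h /var/lib/vz /vm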
 
Ok, thank you! Could you also post the output of lsblk and df -h? Thanks!
 
Bash:
root@pve:~# lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0          7:0    0     8G  0 loop
loop1          7:1    0    14G  0 loop
sda            8:0    0 476.9G  0 disk
├─sda1         8:1    0  1007K  0 part
├─sda2         8:2    0   512M  0 part /boot/efi
└─sda3         8:3    0 476.4G  0 part
  ├─pve-swap 253:0    0     8G  0 lvm  [SWAP]
  └─pve-root 253:1    0 468.4G  0 lvm  /
nvme0n1      259:0    0 953.9G  0 disk
└─nvme0n1p1  259:1    0 953.9G  0 part /mnt/pve/data
Bash:
root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   16G     0   16G   0% /dev
tmpfs                 3.2G  1.1M  3.2G   1% /run
/dev/mapper/pve-root  461G  258G  184G  59% /
tmpfs                  16G   46M   16G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/nvme0n1p1        938G   56K  891G   1% /mnt/pve/data
/dev/sda2             511M  328K  511M   1% /boot/efi
/dev/fuse             128M   32K  128M   1% /etc/pve
tmpfs                 3.2G     0  3.2G   0% /run/user/0
 
Ok, so as I see it, you might have your storage misconfigured. The following seems to be the case:

  • Your NVMe is mounted at "/mnt/pve/data", but is not configured as a storage in PVE.
  • You have a directory-based storage called "storage" that resides at "/vm" on your local disk.
  • That "storage" is used for VM disks and backups (the "/vm/dump" folder contains the backups).
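
If you want to put the NVMe back into use, the existing mount point could be registered as a directory storage; something along these lines should do it (the storage name "nvme-data" and the content types are just examples):
Bash:
# register the NVMe mount point as a directory storage for VM images and backups
pvesm add dir nvme-data --path /mnt/pve/data --content images,backup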
 
Oh also, you might have used your NVMe previously as a storage, but it does not seem to be configured as a storage backend currently.
 
Is there a way to correct this? I may have gone through a couple of iterations of Proxmox installations on these drives.
 
So you want to use your NVMe for backups and VM images instead of the current "/vm" folder? And do you want to keep what is currently stored on the NVMe, just in case?
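
If so, once the NVMe is set up as a storage, each VM disk could be moved over individually; a sketch, with the VM ID, disk name, and target storage name as placeholders (on newer PVE releases the command is "qm disk move"):
Bash:
# move VM 100's scsi0 disk to the NVMe-backed storage and remove the old copy afterwards
qm move-disk 100 scsi0 nvme-data --delete 1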
 
My understanding of the way I set it up is that the NVMe was to store my VMs plus a weekly backup retaining just one copy. My live VMs are on the NVMe. If I have to totally format and restore from PBS, I can.
 
My understanding of the way I set it up is that the NVMe was to store my VMs plus a weekly backup retaining just one copy. My live VMs are on the NVMe. If I have to totally format and restore from PBS, I can.
Did you ever find a solution?! I have the exact same issue, just not with an NVMe.
 
Did you ever find a solution?! I have the exact same issue, just not with an NVMe.
My problem was that when I changed the host name, it retained all of my old backups. After changing the host name, I re-imported my VMs from PBS. In the end I ran a clean install.
 