container shows wrong disk usage?

limone
Hi,

this is what my df -h output looks like:

root@test:~# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/pve-vm--200--disk--1   98G   58G   36G  62% /

This container has almost no data on it, so how can 58GB be "used"?
Is this a bug in the filesystem?

Also, is there a way to see the real disk usage on the host node?
df -h doesn't seem to work there.
I've got a 320GB HDD, but df -h just shows me this:
/dev/mapper/pve-root                77G   11G   62G  15% /
and a few other small filesystems, but nothing over 3GB in size.

Edit: OK, I found "pvs" and "lvs".

pvs shows this:
PV         VG  Fmt  Attr PSize   PFree
/dev/sda3  pve lvm2 a--  311.75g 15.79g

Did I do something wrong? There's no way I'm using nearly 300GB of disk space.
 
'pvs' shows the 'physical volumes' for lvm
'vgs' shows the 'volume groups' -> their size is fixed (no thin provisioning)

what you are looking for is the output of
'lvs' - those are the 'logical volumes' -> they can be thick or thin provisioned
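a quick way to tell the two apart is something like this (just a sketch, assuming the VG is called 'pve' as in your pvs output):
Code:
# list name, attribute flags, pool and usage for every LV in VG 'pve'
lvs -o lv_name,lv_attr,pool_lv,data_percent pve
an attr field starting with 'V' plus an entry in the Pool column means a thin volume (space is only allocated when written), while an attr starting with '-' and an empty Pool column means a thick, fully allocated volume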

as for the usage in the container:

a
Code:
du -sh /*

can probably show what is using the disk.
Also, for a more convenient tool for this, have a look at 'ncdu'.
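if the output is long, a sorted variant like this makes the biggest directories easier to spot (a sketch, with -x added so du stays on the container's root filesystem):
Code:
# top-level directory sizes, largest last, permission errors suppressed
du -xsh /* 2>/dev/null | sort -h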
 
Code:
root@test:~# du -sh /*
7.9M    /bin
4.0K    /boot
0       /dev
4.3M    /etc
36K     /home
38M     /lib
4.0K    /lib64
16K     /lost+found
4.0K    /media
4.0K    /mnt
884K    /opt
0       /proc
109M    /root
56K     /run
5.2M    /sbin
4.0K    /srv
0       /sys
12K     /tmp
799M    /usr
362M    /var

ncdu shows the same
Code:
  798.7MiB [##########] /usr
  361.5MiB [####      ] /var
  108.0MiB [#         ] /root
   38.0MiB [          ] /lib
    7.8MiB [          ] /bin
    5.2MiB [          ] /sbin
    4.3MiB [          ] /etc
  884.0KiB [          ] /opt
   56.0KiB [          ] /run
   36.0KiB [          ] /home
e  16.0KiB [          ] /lost+found
   12.0KiB [          ] /tmp
    4.0KiB [          ] /lib64
e   4.0KiB [          ] /srv
e   4.0KiB [          ] /mnt
e   4.0KiB [          ] /media
e   4.0KiB [          ] /boot
.   0.0  B [          ] /proc
.   0.0  B [          ] /sys
    0.0  B [          ] /dev
This is definitely not 58GB in total :D

output of lvs:
Code:
LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 213.00g             92.27  43.58
  root          pve -wi-ao----  77.75g
  swap          pve -wi-ao----   5.00g
  vm-100-disk-1 pve Vwi-aotz-- 100.00g data        99.94
  vm-102-disk-1 pve Vwi-aotz-- 200.00g data        5.08
  vm-103-disk-1 pve Vwi-aotz--  50.00g data        66.84
  vm-104-disk-2 pve Vwi-aotz-- 100.00g data        20.67
  vm-105-disk-1 pve Vwi-aotz--  20.00g data        13.27
  vm-106-disk-1 pve Vwi-aotz--  50.00g data        8.74
  vm-109-disk-1 pve Vwi-aotz-- 100.00g data        10.49
  vm-149-disk-1 pve Vwi-aotz--  10.00g data        23.77
  vm-150-disk-1 pve Vwi-aotz--  10.00g data        21.64
  vm-200-disk-1 pve Vwi-aotz-- 100.00g data        5.09
  vm-220-disk-1 pve Vwi-aotz--   5.00g data        25.16
  vm-221-disk-1 pve Vwi-aotz--   5.00g data        26.79
  vm-222-disk-1 pve Vwi-aotz--   5.00g data        24.86
  vm-254-disk-1 pve Vwi-aotz--   3.00g data        45.77

How can I see if it's thin or thick?
 
can you post your container config?
Code:
pct config 200
 
Code:
arch: amd64
cores: 2
cpuunits: 1000
hostname: test
memory: 2048
net0: name=eth0,bridge=vmbr2,gw=123.123.123.123,hwaddr=2A:F7:80:DD:F7:A6,ip=123.123.123.200/24,type=veth
ostype: debian
rootfs: local-lvm:vm-200-disk-1,size=100G
searchdomain: 8.8.8.8
swap: 0
 
mhmm, do you have anything mounted inside the container? (so maybe you mounted over a directory that already had data in it?)
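one way to check for data hidden underneath a mount point is to bind-mount / somewhere else and run du against that copy; a sketch, with /mnt/rootcheck as an arbitrary example path (depending on the container's restrictions you may have to do this from the host instead):
Code:
# a bind mount of / also shows files that are hidden below other mount points
mkdir -p /mnt/rootcheck
mount --bind / /mnt/rootcheck
du -xsh /mnt/rootcheck/*
umount /mnt/rootcheck
rmdir /mnt/rootcheck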
 
I don't think so, as I don't have anything in /etc/fstab and the disk space is still used after a reboot.
Is there any way to check this?

here's a screenshot of my LVM:
(screenshot of the LVM storage view)


I don't use anywhere near 200GB; I'd guess something like 100GB should be the real value.

Edit: even 100GB is high, more like 60-80GB
 
I don't think so, as I don't have anything in /etc/fstab and the disk space is still used after a reboot.
Is there any way to check this?
what does 'mount' say in the container?

I don't use anywhere near 200GB; I'd guess something like 100GB should be the real value.
blocks are allocated when they are written to; you only get them back if you trim inside the container/vm (in the case of a vm, the 'discard' option must be set on the disk and it must be on the virtio-scsi controller)
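for the VM case that would look roughly like the following disk entry in the VM config (purely illustrative; the VM ID, disk name and size here are made up):
Code:
# example entries in /etc/pve/qemu-server/<vmid>.conf (illustrative values)
scsihw: virtio-scsi-pci
scsi0: local-lvm:vm-999-disk-1,discard=on,size=100G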
 
what does 'mount' say in the container?
Code:
root@test:~# mount
/dev/mapper/pve-vm--200--disk--1 on / type ext4 (rw,relatime,stripe=32,data=ordered)
none on /dev type tmpfs (rw,relatime,size=492k,mode=755)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/sys/net type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/sys type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sysrq-trigger type proc (ro,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
sysfs on /sys/devices/virtual/net type sysfs (rw,relatime)
sysfs on /sys/devices/virtual/net type sysfs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
lxcfs on /proc/cpuinfo type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/diskstats type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/meminfo type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/stat type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/swaps type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/uptime type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
devpts on /dev/console type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
devpts on /dev/ptmx type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
devpts on /dev/tty1 type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
devpts on /dev/tty2 type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=610840k,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=419420k)
cgroup on /sys/fs/cgroup type tmpfs (rw,relatime,size=12k)
cgmfs on /run/cgmanager/fs type tmpfs (rw,relatime,size=100k,mode=755)
systemd on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=209716k,mode=700)

blocks are allocated when they are written to; you only get them back if you trim inside the container/vm (in the case of a vm, the 'discard' option must be set on the disk and it must be on the virtio-scsi controller)

I don't have a VM, I use a container. Does this still apply to me, or were you only referring to VMs?
 
What is this option called exactly? Trim? Discard?
In Proxmox I cannot find such an option...
(screenshots of the container's disk options in the Proxmox GUI)
 
you have to run
Code:
fstrim <mount-point>
inside your container
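if it is easier, you can also trigger it from the host; a sketch assuming the container ID is 200:
Code:
# run fstrim inside the container from the Proxmox host
pct exec 200 -- fstrim -v /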
 
I've run
Code:
fstrim /
but df -h still states
Code:
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/pve-vm--200--disk--1   98G   58G   36G  62% /
 
maybe there are open but already deleted files? (like apache log files)

try the following command:
Code:
lsof -nP +L1 | sort -k 9 -n | numfmt --field=9 --to=iec --header

this gives a list of all files that are still open but already deleted, sorted by size (biggest at the bottom)
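if such a file shows up, the space is only freed once the process lets go of it; roughly, the options look like this (PID 1234 and fd 4 are placeholders you would take from the lsof output):
Code:
# restart or stop the process that still holds the deleted file, e.g. for an apache log:
systemctl restart apache2
# or truncate the deleted file through the process's file descriptor without a restart:
: > /proc/1234/fd/4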
 
In the container there is no output; on the host node there are some files, but not many, and they're not big, 5MB max.
 
Code:
LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 213.00g             92.27  43.58
  root          pve -wi-ao----  77.75g
  swap          pve -wi-ao----   5.00g
  vm-100-disk-1 pve Vwi-aotz-- 100.00g data        99.94
  vm-102-disk-1 pve Vwi-aotz-- 200.00g data        5.08
  vm-103-disk-1 pve Vwi-aotz--  50.00g data        66.84
  vm-104-disk-2 pve Vwi-aotz-- 100.00g data        20.67
  vm-105-disk-1 pve Vwi-aotz--  20.00g data        13.27
  vm-106-disk-1 pve Vwi-aotz--  50.00g data        8.74
  vm-109-disk-1 pve Vwi-aotz-- 100.00g data        10.49
  vm-149-disk-1 pve Vwi-aotz--  10.00g data        23.77
  vm-150-disk-1 pve Vwi-aotz--  10.00g data        21.64
  vm-200-disk-1 pve Vwi-aotz-- 100.00g data        5.09
  vm-220-disk-1 pve Vwi-aotz--   5.00g data        25.16
  vm-221-disk-1 pve Vwi-aotz--   5.00g data        26.79
  vm-222-disk-1 pve Vwi-aotz--   5.00g data        24.86
  vm-254-disk-1 pve Vwi-aotz--   3.00g data        45.77

here you can see how much data each disk uses; caution: the Data% column is a percentage of that disk's size
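to turn those percentages into absolute numbers, something along these lines should work (a sketch, assuming GNU awk on the host and the VG name 'pve'):
Code:
# print each volume's allocated space in GiB (size * Data% / 100)
lvs --noheadings --units g -o lv_name,lv_size,data_percent pve \
  | awk '$3 != "" { printf "%-16s %7.2f GiB of %s\n", $1, $2 * $3 / 100, $2 }'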
 
I know, but I deleted container 200 and "lvs" doesn't show
vm-200-disk-1 anymore, yet the pool usage is still ~200GB; after deleting this container nothing changed in the total disk usage.
As you can see, I can only write about 13 more GB, and if I delete something nothing gets freed, so it's a race against time.

Code:
root@limone:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 213.00g             93.45  44.01
  root          pve -wi-ao----  77.75g
  swap          pve -wi-ao----   5.00g
  vm-100-disk-1 pve Vwi-aotz-- 100.00g data        99.94
  vm-102-disk-1 pve Vwi-aotz-- 200.00g data        8.25
  vm-103-disk-1 pve Vwi-aotz--  50.00g data        66.84
  vm-104-disk-2 pve Vwi-aotz-- 100.00g data        20.81
  vm-105-disk-1 pve Vwi-aotz--  20.00g data        13.93
  vm-106-disk-1 pve Vwi-aotz--  50.00g data        9.00
  vm-109-disk-1 pve Vwi-aotz-- 100.00g data        11.14
  vm-149-disk-1 pve Vwi-aotz--  10.00g data        25.67
  vm-150-disk-1 pve Vwi-aotz--  10.00g data        21.64
  vm-220-disk-1 pve Vwi-aotz--   5.00g data        25.16
  vm-221-disk-1 pve Vwi-aotz--   5.00g data        26.81
  vm-222-disk-1 pve Vwi-aotz--   5.00g data        24.86
  vm-254-disk-1 pve Vwi-aotz--   3.00g data        45.77
 
