Size of VM on local-lvm confusing

austempest

Member
Jan 2, 2022
Hi all,

I was getting errors starting some CTs and VMs and realized it was because I had run out of room on local-lvm. But I can't figure out why the disks are so big.

Here are some outputs from the node and also from inside VM102. My question is: why is its disk (and potentially others) so big, and what can I do about it?

on the node (pve1)
Code:
root@pve1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

cifs: nas
        path /mnt/pve/nas
        server 192.168.1.120
        share proxmox
        content iso,backup,rootdir,vztmpl,images
        prune-backups keep-all=1
        username root

Code:
root@pve1:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0 465.8G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part /boot/efi
└─sda3                         8:3    0 465.3G  0 part
  ├─pve-swap                 252:0    0     7G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0  15.8G  0 lvm  /
  ├─pve-data_tmeta           252:2    0     1G  0 lvm
  │ └─pve-data-tpool         252:4    0 440.5G  0 lvm
  │   ├─pve-data             252:5    0 440.5G  1 lvm
  │   ├─pve-vm--103--disk--0 252:6    0    80G  0 lvm
  │   ├─pve-vm--103--disk--1 252:7    0     4M  0 lvm
  │   ├─pve-vm--102--disk--0 252:8    0   200G  0 lvm
  │   ├─pve-vm--190--disk--0 252:9    0     4G  0 lvm
  │   ├─pve-vm--194--disk--0 252:10   0    50G  0 lvm
  │   ├─pve-vm--195--disk--0 252:11   0   100G  0 lvm
  │   ├─pve-vm--193--disk--0 252:13   0   100G  0 lvm
  │   ├─pve-vm--192--disk--0 252:14   0   100G  0 lvm
  │   ├─pve-vm--199--disk--0 252:15   0    32G  0 lvm
  │   └─pve-vm--198--disk--0 252:16   0    32G  0 lvm
  └─pve-data_tdata           252:3    0 440.5G  0 lvm
    └─pve-data-tpool         252:4    0 440.5G  0 lvm
      ├─pve-data             252:5    0 440.5G  1 lvm
      ├─pve-vm--103--disk--0 252:6    0    80G  0 lvm
      ├─pve-vm--103--disk--1 252:7    0     4M  0 lvm
      ├─pve-vm--102--disk--0 252:8    0   200G  0 lvm
      ├─pve-vm--190--disk--0 252:9    0     4G  0 lvm
      ├─pve-vm--194--disk--0 252:10   0    50G  0 lvm
      ├─pve-vm--195--disk--0 252:11   0   100G  0 lvm
      ├─pve-vm--193--disk--0 252:13   0   100G  0 lvm
      ├─pve-vm--192--disk--0 252:14   0   100G  0 lvm
      ├─pve-vm--199--disk--0 252:15   0    32G  0 lvm
      └─pve-vm--198--disk--0 252:16   0    32G  0 lvm

For instance, I gave VM102 a 200G disk because that's the maximum it could ever use; since local-lvm is thin-provisioned, if it only uses 50G then that's all it should actually take up... right?
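
(For what it's worth, my understanding is that the space a thin volume actually consumes shows up in the Data% column on the host, so that should be the number that matters — assuming the default pve thin pool naming:)

Code:
root@pve1:~# lvs -o lv_name,lv_size,data_percent pve/vm-102-disk-0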

I've deleted a few CTs to free up enough space to start some of them, and got VM102 up. Here is the lsblk from inside VM102:

Code:
austempest@ubuntu-server:~$ lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0                       7:0    0 55.7M  1 loop /snap/core18/2812
loop1                       7:1    0 55.7M  1 loop /snap/core18/2823
loop2                       7:2    0 63.9M  1 loop /snap/core20/2264
loop3                       7:3    0 63.9M  1 loop /snap/core20/2318
loop4                       7:4    0 91.8M  1 loop /snap/lxd/23991
loop5                       7:5    0 91.8M  1 loop /snap/lxd/24061
loop6                       7:6    0 38.7M  1 loop /snap/snapd/21465
loop7                       7:7    0 38.8M  1 loop /snap/snapd/21759
sda                         8:0    0  200G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0   99G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0   99G  0 lvm  /
sr0                        11:0    1 1024M  0 rom

However, ncdu -x / from inside VM102 gives:
Code:
ncdu 1.15.1 ~ Use the arrow keys to navigate, press ? for help
--- / -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    4.0 GiB [##########]  swap.img
.   3.9 GiB [######### ] /var
    3.6 GiB [######### ] /usr
.   2.1 GiB [#####     ] /home
   81.1 MiB [          ] /volume2
   10.2 MiB [          ]  core
   .... and more folders less than 10MiB
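
One thing I've read but haven't verified: with thin LVM, blocks that get freed inside the guest apparently stay allocated in the pool unless the virtual disk has discard enabled and the guest runs fstrim. If that's right, something like the below should hand space back to the pool — assuming VM102's disk is attached as scsi0 (qm config 102 would confirm):

Code:
# on the host: re-add the disk with discard enabled (scsi0 is an assumption)
root@pve1:~# qm set 102 --scsi0 local-lvm:vm-102-disk-0,discard=on

# inside VM102, after a full stop/start of the VM: release freed blocks
austempest@ubuntu-server:~$ sudo fstrim -av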
 
In case it helps, here is the output of df -h, vgs, and lvs on the host:

Code:
root@pve1:~# df -h
Filesystem               Size  Used Avail Use% Mounted on
udev                     7.8G     0  7.8G   0% /dev
tmpfs                    1.6G  1.2M  1.6G   1% /run
/dev/mapper/pve-root      16G  8.9G  5.8G  61% /
tmpfs                    7.8G   46M  7.8G   1% /dev/shm
tmpfs                    5.0M     0  5.0M   0% /run/lock
efivarfs                 128K  113K   11K  92% /sys/firmware/efi/efivars
/dev/sda2                511M  328K  511M   1% /boot/efi
/dev/fuse                128M   20K  128M   1% /etc/pve
//192.168.1.120/proxmox   42T   29T   14T  69% /mnt/pve/nas
tmpfs                    1.6G     0  1.6G   0% /run/user/0

Code:
root@pve1:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1  15   0 wz--n- <465.26g    0

Code:
root@pve1:~# lvs
  LV                               VG  Attr       LSize    Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  base-901-disk-0                  pve Vri---tz-k   50.00g data
  data                             pve twi-aotz-- <440.51g                    94.21  18.47
  root                             pve -wi-ao----   15.75g
  snap_vm-190-disk-0_pihole_220108 pve Vri---tz-k    4.00g data vm-190-disk-0
  swap                             pve -wi-ao----    7.00g
  vm-102-disk-0                    pve Vwi-aotz--  200.00g data               46.09
  vm-103-disk-0                    pve Vwi-a-tz--   80.00g data               27.51
  vm-103-disk-1                    pve Vwi-a-tz--    4.00m data               3.12
  vm-190-disk-0                    pve Vwi-a-tz--    4.00g data               92.10
  vm-192-disk-0                    pve Vwi-aotz--  100.00g data               76.48
  vm-193-disk-0                    pve Vwi-aotz--  100.00g data               99.51
  vm-194-disk-0                    pve Vwi-aotz--   50.00g data               38.68
  vm-195-disk-0                    pve Vwi-aotz--  100.00g data               92.40
  vm-198-disk-0                    pve Vwi-a-tz--   32.00g data               8.95
  vm-199-disk-0                    pve Vwi-aotz--   32.00g data               12.97
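
Doing the maths on that lvs output, the pool's 94.21% looks like it's just the sum of each volume's LSize × Data%, e.g.:

Code:
vm-102-disk-0:   200G x 46.09% ≈  92.2G actually allocated
vm-193-disk-0:   100G x 99.51% ≈  99.5G
vm-195-disk-0:   100G x 92.40% ≈  92.4G
pool total:   440.51G x 94.21% ≈ 415.0G used

So vm-102-disk-0 has ~92G allocated in the pool even though, as shown below, df inside the guest only reports ~21G used.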


Here are the same three commands run inside VM102:
Code:
austempest@ubuntu-server:~$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
tmpfs                              666M  1.4M  664M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   98G   21G   73G  23% /
tmpfs                              3.3G     0  3.3G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
/dev/sda2                          974M  253M  654M  28% /boot
192.168.1.120:/volume1/nas1         42T   29T   14T  69% /media/nas/nas1
192.168.1.120:/volume2/nas2         11T  1.6T  9.0T  15% /media/nas/nas2
tmpfs                              676M  4.0K  676M   1% /run/user/1000

Code:
austempest@ubuntu-server:~$ sudo vgs
[sudo] password for austempest:
  VG        #PV #LV #SN Attr   VSize   VFree
  ubuntu-vg   1   1   0 wz--n- <99.00g    0

Code:
austempest@ubuntu-server:~$ sudo lvs
  LV        VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ubuntu-lv ubuntu-vg -wi-ao---- <99.00g
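
(Side note I only just spotted: inside VM102 only ~100G of the 200G virtual disk is even partitioned — sda3 is 99G — so the guest couldn't fill the disk anyway. If I ever want the rest, I believe the usual steps are something like this, though I haven't tested it here and I'm assuming the root filesystem is ext4:)

Code:
# inside VM102: grow partition 3, then the PV, LV and filesystem (untested sketch)
sudo growpart /dev/sda 3
sudo pvresize /dev/sda3
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv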
 