Hi all,
I was receiving errors starting some CTs and VMs and realized it was because I ran out of room, but I can't figure out why the disks are so big.
Here are some outputs from the node and also from inside VM102. My question is: why is it (and potentially others) so big, and what can I do?
On the node (pve1):
Code:
root@pve1:~# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content vztmpl,iso,backup
lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir
cifs: nas
path /mnt/pve/nas
server 192.168.1.120
share proxmox
content iso,backup,rootdir,vztmpl,images
prune-backups keep-all=1
username root
Code:
root@pve1:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
└─sda3 8:3 0 465.3G 0 part
├─pve-swap 252:0 0 7G 0 lvm [SWAP]
├─pve-root 252:1 0 15.8G 0 lvm /
├─pve-data_tmeta 252:2 0 1G 0 lvm
│ └─pve-data-tpool 252:4 0 440.5G 0 lvm
│ ├─pve-data 252:5 0 440.5G 1 lvm
│ ├─pve-vm--103--disk--0 252:6 0 80G 0 lvm
│ ├─pve-vm--103--disk--1 252:7 0 4M 0 lvm
│ ├─pve-vm--102--disk--0 252:8 0 200G 0 lvm
│ ├─pve-vm--190--disk--0 252:9 0 4G 0 lvm
│ ├─pve-vm--194--disk--0 252:10 0 50G 0 lvm
│ ├─pve-vm--195--disk--0 252:11 0 100G 0 lvm
│ ├─pve-vm--193--disk--0 252:13 0 100G 0 lvm
│ ├─pve-vm--192--disk--0 252:14 0 100G 0 lvm
│ ├─pve-vm--199--disk--0 252:15 0 32G 0 lvm
│ └─pve-vm--198--disk--0 252:16 0 32G 0 lvm
└─pve-data_tdata 252:3 0 440.5G 0 lvm
└─pve-data-tpool 252:4 0 440.5G 0 lvm
├─pve-data 252:5 0 440.5G 1 lvm
├─pve-vm--103--disk--0 252:6 0 80G 0 lvm
├─pve-vm--103--disk--1 252:7 0 4M 0 lvm
├─pve-vm--102--disk--0 252:8 0 200G 0 lvm
├─pve-vm--190--disk--0 252:9 0 4G 0 lvm
├─pve-vm--194--disk--0 252:10 0 50G 0 lvm
├─pve-vm--195--disk--0 252:11 0 100G 0 lvm
├─pve-vm--193--disk--0 252:13 0 100G 0 lvm
├─pve-vm--192--disk--0 252:14 0 100G 0 lvm
├─pve-vm--199--disk--0 252:15 0 32G 0 lvm
└─pve-vm--198--disk--0 252:16 0 32G 0 lvm
For instance, I gave VM102 a 200G disk size because that was the max it might ever need, on the understanding that if it only uses 50G, then 50G is all it will actually take up... right?
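As I understand it (happy to be corrected), the actual space each thin volume consumes shows up in the Data% column of lvs, so I can post this output from the node if it helps (field names taken from the lvs man page, so I hope I have them right):

Code:
# Show each LV in VG "pve" with its virtual size and how much of that
# size has actually been written to the thin pool (Data%).
root@pve1:~# lvs pve -o lv_name,lv_size,data_percent,pool_lv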
I've deleted a few CTs to free up enough space to start some of them, and got VM102 up. Here is the lsblk from inside VM102:
Code:
austempest@ubuntu-server:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 55.7M 1 loop /snap/core18/2812
loop1 7:1 0 55.7M 1 loop /snap/core18/2823
loop2 7:2 0 63.9M 1 loop /snap/core20/2264
loop3 7:3 0 63.9M 1 loop /snap/core20/2318
loop4 7:4 0 91.8M 1 loop /snap/lxd/23991
loop5 7:5 0 91.8M 1 loop /snap/lxd/24061
loop6 7:6 0 38.7M 1 loop /snap/snapd/21465
loop7 7:7 0 38.8M 1 loop /snap/snapd/21759
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 99G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 99G 0 lvm /
sr0 11:0 1 1024M 0 rom
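One thing I notice here: sda3 is only 99G even though the virtual disk is 200G, so the partition was apparently never grown to fill the disk. If the guest's partition layout is useful, I can also post this (syntax taken from the parted docs, so I hope I have it right):

Code:
# Print the guest partition table in GiB, including unallocated space.
austempest@ubuntu-server:~$ sudo parted /dev/sda unit GiB print free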
However, ncdu -x / from inside VM102 gives:
Code:
ncdu 1.15.1 ~ Use the arrow keys to navigate, press ? for help
--- / -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
4.0 GiB [##########] swap.img
. 3.9 GiB [######### ] /var
3.6 GiB [######### ] /usr
. 2.1 GiB [##### ] /home
81.1 MiB [ ] /volume2
10.2 MiB [ ] core
.... and more folders less than 10MiB
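In case it's relevant: that ncdu total is only about 14 GiB, far less than whatever the 200G thin volume has actually allocated on the node. My guess from reading around is that blocks freed inside the guest are never returned to the thin pool unless they get trimmed, so I could try this from inside VM102 (assuming the disk has the discard option enabled in the VM's hardware settings; otherwise I gather it does nothing):

Code:
# Trim every mounted filesystem that supports discard and report how
# much space was released back to the underlying thin pool.
austempest@ubuntu-server:~$ sudo fstrim -av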