Running out of space on LVM Thin

sheen

New Member
Dec 5, 2023
Hi. I seem to be running out of space for my VMs, even though in my opinion I should have plenty. The root volume for the Proxmox server takes a lot of space, about 68 GB.
Can I take part of it and assign it to the VMs? Or is that space already assigned to the VMs automatically, i.e. part of the thin pool?

Here is how my config looks:


Code:
root@pve:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 150.00g             95.54  4.36
  root          pve -wi-ao----  68.06g
  swap          pve -wi-ao----  <7.63g
  vm-100-disk-0 pve Vwi-aotz-- 150.00g data        92.24
  vm-101-disk-0 pve Vwi-a-tz--  16.00g data        30.93


Code:
root@pve:~# pvs
  PV         VG  Fmt  Attr PSize   PFree
  /dev/sda3  pve lvm2 a--  231.88g 3.39g

Code:
root@pve:~# vgdisplay pve
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  33
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               231.88 GiB
  PE Size               4.00 MiB
  Total PE              59362
  Alloc PE / Size       58493 / <228.49 GiB
  Free  PE / Size       869 / 3.39 GiB
  VG UUID               ICbruK-bD25-8Tob-3KMb-cKZq-W0Ba-m5hv4m

Code:
root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  7.8G     0  7.8G   0% /dev
tmpfs                 1.6G  1.1M  1.6G   1% /run
/dev/mapper/pve-root   67G   21G   43G  32% /
tmpfs                 7.8G   46M  7.8G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
efivarfs              118K   46K   68K  41% /sys/firmware/efi/efivars
/dev/sda2            1022M   12M 1011M   2% /boot/efi
/dev/fuse             128M   20K  128M   1% /etc/pve
tmpfs                 1.6G     0  1.6G   0% /run/user/0

Thank you.
 
The entire thin pool "data" is 150 GB, and vm-100-disk-0 alone is also 150 GB. On top of that, you have allocated another 16 GB for vm-101-disk-0.

You have overprovisioned your virtual environment and do not have enough physical disk to fully back both vdisks in that pool.
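
A quick way to see how close the pool actually is to filling up (just a sketch; the -o column names are standard lvs report fields):

Code:
# allocated size vs. real usage for the thin pool and its thin volumes
lvs -o lv_name,lv_size,data_percent,metadata_percent pve

# rough arithmetic from your output:
#   pool "data":   150 GiB * 95.54% ≈ 143 GiB actually written
#   vm-100-disk-0: 150 GiB * 92.24% ≈ 138 GiB written by that guest alone
# once Data% reaches 100 the pool is full and writes inside the guests start failing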

250 GB is barely enough for the OS rootfs and a bit of lvm-thin; it's not really enough to run VMs on, especially if you want to take any snapshots.
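
Snapshots of a thin volume live in the same pool, so each one can grow toward the full size of the disk it tracks as the guest keeps writing. If you do take them, it looks something like this (the snapshot name is just an example), and they should be cleaned up promptly:

Code:
# create and later remove a snapshot of VM 100; the snapshot's changed blocks
# are stored in the same thin pool "data"
qm snapshot 100 before-upgrade
qm delsnapshot 100 before-upgrade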

You cannot reasonably expect to run a hypervisor and any decent number of VMs in that kind of tiny environment. Maybe a few LXCs with ~20GB vdisks, but even then it's unreasonably constrained. Modern standards have changed. Disk is cheap, and you need to invest in your infrastructure if you want a virtual playground with a bit of room to grow.

I recommend you add another physical disk to the system (at least 1TB), make a ZFS single-disk pool on it, and move your VM virtual disks to that pool.
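
Roughly along these lines; the device, pool and storage names are placeholders, the exact qm syntax varies a bit between PVE versions, and the same steps are available in the GUI (Datacenter > Storage, and VM > Hardware > Move Storage):

Code:
# single-disk pool on the new drive (example device /dev/sdb, pool name "tank")
zpool create -o ashift=12 tank /dev/sdb

# register it in Proxmox as storage for guest disks
pvesm add zfspool tank -pool tank -content images,rootdir

# move a guest disk to the new storage and drop the old copy
# (disk key per the VM's hardware tab, e.g. scsi0)
qm move-disk 100 scsi0 tank --delete 1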

Or even better, add two disks and mirror them as a ZFS RAID1. That keeps the OS separate from the VMs and data, and the mirror gives you a chance to replace a failed disk with no downtime.
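
Again only a sketch with placeholder device names; use /dev/disk/by-id/ paths so the pool does not depend on sdX ordering:

Code:
# two-disk mirror (ZFS's RAID1 equivalent); either disk can fail without data loss
zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
zpool status tank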

Also remember that you still need backups.
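
For example, a scheduled vzdump job (the storage name "backups" is a placeholder for whatever backup target you configure; a Proxmox Backup Server is even better if you can run one):

Code:
# snapshot-mode backup of VM 100 with zstd compression to a configured backup storage
vzdump 100 --mode snapshot --compress zstd --storage backups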