Help understanding free PEs in lvm thin pool

adystech
Jan 2, 2023
Hello, I have an SSD-backed LVM thin pool. The UI shows the disk is only 10% used (46.61 GB of 470.37 GB), yet if I try to create a new logical volume in the UI it complains that there is not enough free space left. Checking pvdisplay does show that only 30 `Free PE` (around 120 MiB) remain.

Could someone explain why I can't create a new volume when the sum of all allocated logical volumes is certainly less than the disk size?

Code:
root@proxmox:~ # lvs
  LV             VG             Attr       LSize    Pool           Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data           pve            twi-aotz--   54.30g                       0.00   1.59
  root           pve            -wi-ao----  <39.69g
  swap           pve            -wi-ao----    8.00g
  vm-100-disk-0  vm-crucial-ssd Vwi-aotz--   26.00g vm-crucial-ssd        82.66
  vm-102-disk-0  vm-crucial-ssd Vwi-aotz--    4.00g vm-crucial-ssd        33.98
  vm-104-disk-0  vm-crucial-ssd Vwi-a-tz--    4.00m vm-crucial-ssd        14.06
  vm-104-disk-1  vm-crucial-ssd Vwi-a-tz--   64.00g vm-crucial-ssd        29.12
  vm-104-disk-2  vm-crucial-ssd Vwi-a-tz--    4.00m vm-crucial-ssd        1.56
  vm-111-disk-0  vm-crucial-ssd Vwi-aotz--    4.00g vm-crucial-ssd        48.18
  vm-crucial-ssd vm-crucial-ssd twi-aotz-- <438.07g                       9.91   0.72
root@proxmox:~ # vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  39
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <118.74 GiB
  PE Size               4.00 MiB
  Total PE              30397
  Alloc PE / Size       26621 / <103.99 GiB
  Free  PE / Size       3776 / 14.75 GiB
  VG UUID               G342o9-trUn-6kfa-jCMb-BwNh-DJG9-Ud1G1L

  --- Volume group ---
  VG Name               vm-crucial-ssd
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  141
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                7
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <447.13 GiB
  PE Size               4.00 MiB
  Total PE              114465
  Alloc PE / Size       114435 / 447.01 GiB
  Free  PE / Size       30 / 120.00 MiB
  VG UUID               Md8MrM-79Lf-ZlAf-xAJ9-LQsG-i0l8-6p3ANw

root@proxmox:~ # pvdisplay
  --- Physical volume ---
  PV Name               /dev/nvme0n1p3
  VG Name               pve
  PV Size               118.74 GiB / not usable <3.32 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              30397
  Free PE               3776
  Allocated PE          26621
  PV UUID               2V7129-CgEf-NcnJ-DMYi-VVzD-nHa1-F3k9Dl

  --- Physical volume ---
  PV Name               /dev/sda1
  VG Name               vm-crucial-ssd
  PV Size               447.13 GiB / not usable <1.82 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              114465
  Free PE               30
  Allocated PE          114435
  PV UUID               pkpLlS-KnLj-DCpx-Rj9t-f6MH-2FFo-Vp6E5Z
root@proxmox:~ # lvcreate -L 4g -n gluster-data vm-crucial-ssd
  Volume group "vm-crucial-ssd" has insufficient free space (30 extents): 1024 required.

I can't reproduce it with my version, but to me it seems lvm2 has a bug. Proxmox shows it exactly as lvm2 reports it, and 10% used is correct; it's the free PE count that looks wrong. Or maybe you can find an answer there.
 
VG Name vm-crucial-ssd
this is your volume group
Free PE / Size 30 / 120.00 MiB
this is how much "free" space you have in your volume group
vm-crucial-ssd vm-crucial-ssd twi-aotz-- <438.07g 9.91 0.72
this is a (t)hin pool, named the same as your volume group, and most likely where your other LVs were provisioned from. The pool LV itself occupies nearly all of the VG's extents, which is why the VG reports only 30 free PEs even though the pool is only ~10% full.

lvcreate -L 4g -n gluster-data vm-crucial-ssd Volume group "vm-crucial-ssd" has insufficient free space (30 extents): 1024 required.
You don't have 4 GB of free space left in your VG to create another regular LV.
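The extent arithmetic behind that error message can be checked directly. This is just a sketch; the 4 MiB PE size and 30 free extents are taken from the vgdisplay output above:

```shell
#!/bin/sh
# Figures from the vgdisplay output for VG vm-crucial-ssd
pe_size_mib=4               # PE Size: 4.00 MiB
free_pe=30                  # Free PE: 30 (= 120 MiB)
request_mib=$((4 * 1024))   # lvcreate -L 4g asks for 4096 MiB

# Extents lvcreate must allocate for a regular (non-thin) 4 GiB LV
needed_pe=$((request_mib / pe_size_mib))
echo "required extents: $needed_pe"   # 1024, matching the error
echo "free extents:     $free_pe"     # 30, hence the failure
```

So a plain `-L 4g` needs 1024 extents from the VG, but only 30 are left because the thin pool already claimed the rest.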

You should read through https://www.tecmint.com/setup-thin-provisioning-volumes-in-lvm/ and note the correct command for creating an LV inside a thin pool, in the "Creating Thin Volumes" section.
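For reference, a thin volume is carved out of the pool with `-V` (virtual size) instead of `-L`, so it draws on the pool's capacity rather than free extents in the VG. A sketch using the pool and VG names from this thread (run as root on the affected host; adjust names to your setup):

```shell
# Create a thinly provisioned 4 GiB volume inside the existing pool
# (pool LV "vm-crucial-ssd" lives in the VG of the same name)
lvcreate -n gluster-data -V 4g --thinpool vm-crucial-ssd/vm-crucial-ssd

# Verify: the new LV should appear with attr Vwi-a-tz-- and Data% 0.00
lvs vm-crucial-ssd
```

Because the space is virtual, the pool will happily overcommit; keep an eye on the pool's Data% and Meta% columns in `lvs` so it never actually fills up.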


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 