LVM Thin Provisioning Exhausted?

Hello!

I've been preparing a PVE host for several weeks and was almost ready to put it into production. Following the Administration Guide, I added a 2 TB disk to use for vmdata with thin provisioning:
Code:
lvcreate -L 1.8T -T -c 256K -n vmstore vmdata
That differs from the Guide by the addition of "-c 256K", since I couldn't create the thin pool otherwise... but there is no indication of its ideal value.
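For reference, the pool's actual chunk size and metadata usage can be checked with lvs, and thin_metadata_size from thin-provisioning-tools can give a rough estimate of the metadata size needed for a given chunk size (the pool size and volume count below are only placeholder values for illustration):

Code:
# Show the pool's chunk size and current data/metadata usage
lvs -a -o+chunk_size,data_percent,metadata_percent vmdata

# Rough metadata size estimate for a 2T pool with 256K chunks and up to
# 100 thin volumes/snapshots (placeholder values, result reported in MiB)
thin_metadata_size -b 256k -s 2t -m 100 -u m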

After adding all my VMs, I started a job generating A LOT of data in one of them (the VM has a 256 GB virtual disk), but a few minutes later I got a lot of errors:
Code:
[...]
Jul 2 12:04:42 pve kernel: [ 147.745197] sd 0:0:0:0: [sda] tag#5 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jul 2 12:04:42 pve kernel: [ 147.745199] sd 0:0:0:0: [sda] tag#5 CDB: Read(10) 28 00 00 00 08 00 00 01 00 00
[...]

Constantly repeated. Every time I try to start a VM, I get these errors.
At first I thought it was a hardware failure. However, after digging both on the net and through my system logs, and looking at the output of commands such as smartctl, I am convinced the problem is tied to the thin pool. Indeed, in the logs I found:
Code:
Jul 2 12:22:31 pve lvm[472]: WARNING: Thin pool vmdata-vmstore-tpool metadata is now 100.00% full.
Jul 2 12:22:31 pve lvm[472]: WARNING: Thin pool vmdata-vmstore-tpool data is now 100.00% full.
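In case it is useful, the pool's state can also be checked at the device-mapper level; dmsetup reports the thin pool's used/total metadata and data blocks directly:

Code:
# Low-level thin-pool status: used/total metadata blocks, then used/total data blocks
dmsetup status vmdata-vmstore-tpool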

So... what do you think my options are at this point?
The thin pool takes up the whole disk: how can I resize it... without losing data?
Maybe by growing the chunk size, so there are fewer chunks, if such a command exists?
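In case it helps the discussion, this is the kind of command I am wondering about, assuming there is still some unallocated space left in the volume group (I have not dared to run anything yet, so please correct me if this would make things worse):

Code:
# Grow the pool's metadata LV from free space in the VG (the amount is a guess)
lvextend --poolmetadatasize +1G vmdata/vmstore

# Optionally also grow the pool's data LV if the VG still has room (amount is a guess)
lvextend -L +10G vmdata/vmstore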

Note: I've just read another thread that ends with the conclusion "If you run out of space for metadata, all data will be lost. Backup your VMs as soon as possible." There is only one VM I'd like to back up. Would it make sense to remove all of them except that one, back it up, and then reconfigure the thin pool?
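If that is the way to go, I suppose the backup itself would look something like this ("local" is just a placeholder for whatever backup storage is actually configured):

Code:
# Full backup, in stopped mode, of VM 101 (the one using vm-101-disk-0, see below)
vzdump 101 --mode stop --storage local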

Thanks in advance...
 
Knowing that I'd like to keep the VM associated with "vm-101-disk-0", and that the culprit is the one associated with "vm-103-disk-0"...
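If the answer turns out to be "just drop the culprit", I assume it would be one of these (not run yet; as far as I understand, qm destroy also removes the VM's configuration, while lvremove only deletes the thin volume itself):

Code:
# Remove VM 103 through Proxmox (deletes its disks and its configuration)
qm destroy 103

# Or remove only its thin volume at the LVM level
lvremove vmdata/vm-103-disk-0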

Here is the output of some commands:

Code:
# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sdc1  vmdata lvm2 a--  <1,82t 18,90g

Code:
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               vmdata
  PV Size               <1,82 TiB / not usable <4,07 MiB
  Allocatable           yes
  PE Size               4,00 MiB
  Total PE              476931
  Free PE               4839
  Allocated PE          472092
  PV UUID               fIQZbp-LIpu-IDcF-clUj-LGKX-fWgc-TSOunx

Code:
# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  vmdata   1   5   0 wz--n- <1,82t 18,90g

Code:
# vgdisplay
  --- Volume group ---
  VG Name               vmdata
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  33
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <1,82 TiB
  PE Size               4,00 MiB
  Total PE              476931
  Alloc PE / Size       472092 / 1,80 TiB
  Free  PE / Size       4839 / 18,90 GiB
  VG UUID               Pkzg94-xAi2-Ovui-hnL3-eFty-LYR9-rvFMQb

Code:
# lvs -a vmdata
  LV              VG     Attr       LSize   Pool    Origin Data%  Meta%  Move Log Cpy%Sync Convert
  [lvol0_pmspare] vmdata ewi------- 464,00m
  vm-100-disk-0   vmdata Vwi-a-tz--   2,00g vmstore        10,33
  vm-101-disk-0   vmdata Vwi-a-tz-- 128,00g vmstore        20,93
  vm-102-disk-0   vmdata Vwi-a-tz--  32,00g vmstore        39,66
  vm-103-disk-0   vmdata Vwi-a-tz-- 256,00g vmstore        12,51
  vmstore         vmdata twi-aotz--   1,80t                3,89   5,57
  [vmstore_tdata] vmdata Twi-ao----   1,80t
  [vmstore_tmeta] vmdata ewi-ao---- 464,00m

Code:
# lvdisplay 
  --- Logical volume ---
  LV Name                vmstore
  VG Name                vmdata
  LV UUID                ev0HvJ-crjU-u1OI-gRfI-bDPS-ITMX-f4cYrh
  LV Write Access        read/write (activated read only)
  LV Creation host, time pve, 2021-04-07 12:20:14 +0200
  LV Pool metadata       vmstore_tmeta
  LV Pool data           vmstore_tdata
  LV Status              available
  # open                 5
  LV Size                1,80 TiB
  Allocated pool data    3,89%
  Allocated metadata     5,57%
  Current LE             471860
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
  
  --- Logical volume ---
  LV Path                /dev/vmdata/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                vmdata
  LV UUID                f7Tehi-ejEc-VosI-e50f-x8BX-GkZ1-Tcw68r
  LV Write Access        read/write
  LV Creation host, time pve, 2021-04-07 13:56:52 +0200
  LV Pool name           vmstore
  LV Status              available
  # open                 0
  LV Size                2,00 GiB
  Mapped size            10,33%
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

  --- Logical volume ---
  LV Path                /dev/vmdata/vm-101-disk-0
  LV Name                vm-101-disk-0
  VG Name                vmdata
  LV UUID                SFFkSH-WCIe-llXL-PhDb-fE1C-sXpd-oIBMh3
  LV Write Access        read/write
  LV Creation host, time pve, 2021-04-09 13:40:05 +0200
  LV Pool name           vmstore
  LV Status              available
  # open                 0
  LV Size                128,00 GiB
  Mapped size            20,93%
  Current LE             32768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5

  --- Logical volume ---
  LV Path                /dev/vmdata/vm-102-disk-0
  LV Name                vm-102-disk-0
  VG Name                vmdata
  LV UUID                VZBRI7-drJm-sMrn-f6ZF-z2Sh-v0Tf-ETEn8s
  LV Write Access        read/write
  LV Creation host, time pve, 2021-06-21 09:47:24 +0200
  LV Pool name           vmstore
  LV Status              available
  # open                 0
  LV Size                32,00 GiB
  Mapped size            39,66%
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

  --- Logical volume ---
  LV Path                /dev/vmdata/vm-103-disk-0
  LV Name                vm-103-disk-0
  VG Name                vmdata
  LV UUID                FkRtLY-oWzp-ThfX-arvy-6eHW-tr64-dWh3Hw
  LV Write Access        read/write
  LV Creation host, time pve, 2021-06-21 13:05:47 +0200
  LV Pool name           vmstore
  LV Status              available
  # open                 0
  LV Size                256,00 GiB
  Mapped size            12,51%
  Current LE             65536
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7
 