Understanding thin provisioning

rcd

Active Member
Jul 12, 2019
From Proxmox, the hypervisor:
Code:
[root@pve ~]# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   2   6   0 wz--n- 873.72g 573.53g
[root@pve ~]# lvs
  LV              VG  Attr       LSize   Pool   Origin          Data%  Meta%  Move Log Cpy%Sync Convert
  base-101-disk-0 pve Vri---tz-k  32.00g vmdata
  tz              pve -wi-ao---- 100.00g
  vm-100-disk-0   pve Vwi-aotz--  32.00g vmdata                 19.28
  vm-102-disk-0   pve Vwi-aotz--  80.00g vmdata base-101-disk-0 9.94
  vm-103-disk-0   pve Vwi-aotz--  32.00g vmdata base-101-disk-0 23.68
  vmdata          pve twi-aotz-- 200.00g                        10.42  21.92
[root@pve ~]#

As you can see above, I have created 3 VMs, each with a 32 GB thin-provisioned disk. I then extended vm-102 to 80 GB, and it shows (correctly) that only about 9% is used.

Now the problem is, when I go into the actual VM, which is a CentOS 7 server using LVM, it still shows the original 32 GB disk. I could of course expand that to fill the 80 GB, but then what would be the point of using thin provisioning?

Code:
[root@vm-102 ~]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--  <31.00g 4.00m
[root@vm-102 ~]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  centos   1   2   0 wz--n- <31.00g 4.00m
[root@vm-102 ~]# lvs
  LV   VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root centos -wi-ao---- 28.99g
  swap centos -wi-ao----  2.00g
[root@vm-102 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   29G  5.4G   24G  19% /
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G  4.0K  3.9G   1% /dev/shm
tmpfs                    3.9G  8.8M  3.9G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1               1014M  232M  783M  23% /boot
/dev/loop0               1.3G  2.3M  1.2G   1% /tmp
tmpfs                    594M     0  594M   0% /run/user/0
[root@vm-102 ~]#

Probably a matter of me not quite understanding what I'm doing here - but then again, that's why I do it, to learn :) - so if some kind soul could ELI5 this for me: how do I set up the VM so it appears to have whatever maximum I allocated, but only uses what it actually needs - which, as I understand it, is the whole point of thin provisioning. Thanks!!
 
In your guest, you need to rescan the changed disk, resize your physical volume, and then you should see the space there.
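For anyone landing here later, a minimal sketch of those guest-side steps on a CentOS 7 guest like the one above - assuming the disk is /dev/sda, the LVM partition is /dev/sda2 and the root filesystem is XFS (the CentOS 7 default), so adjust names to your setup:
Code:
# make the guest kernel notice the larger virtual disk
echo 1 > /sys/class/block/sda/device/rescan
# grow partition 2 to the end of the disk (growpart comes from cloud-utils-growpart)
growpart /dev/sda 2
# tell LVM the PV grew, then grow the LV and the filesystem on top of it
pvresize /dev/sda2
lvextend -l +100%FREE /dev/centos/root
xfs_growfs /

For an ext4 root you would use resize2fs instead of xfs_growfs. None of this pre-allocates the new space in the thin pool; only blocks that actually get written end up backed by real extents.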

Thin provisioning in practice means that space is only allocated when you actually write to your disk. Unfortunately, freeing previously used space is a totally different matter and depends on a lot of factors.
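As one concrete illustration of the freeing part (not from this thread, just a sketch): if the virtual disk has discard enabled and the guest filesystem supports it, unused blocks can be handed back to the thin pool from inside the guest:
Code:
# inside the guest: ask all mounted filesystems to discard their unused blocks
fstrim -av

Whether the Data% on the host actually drops afterwards depends on the disk and controller settings (e.g. discard enabled on a virtio-scsi disk) - that is the "depends on a lot" part.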
 
I don't know if I misunderstand you - yes I know one way would be to rescan and resize the disk in the VM, but would that not take up all the space on the host?

EDIT: OK, I went ahead with the (fdisk - pvresize - lvextend) and wouldn't you know, it now appears within the VM as if it has 80 GB, but on Proxmox it still only uses about 9%. Smooth! :)
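If you want to double-check that from the host, something along these lines shows the allocation of the disk and of the pool it lives in (LV names taken from the lvs output earlier in the thread):
Code:
# Data% is the share of each virtual size that is really backed by the pool
lvs -o lv_name,lv_size,data_percent pve/vm-102-disk-0 pve/vmdata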
 
I don't know if I misunderstand you - yes I know one way would be to rescan and resize the disk in the VM, but would that not take up all the space on the host?

Yes and no. Every byte you write to your guest disk has to be backed in your thin LVM. Your LV is split into extents (normally 2 MB), and if you write even 1 byte into one, the whole extent gets allocated. The partition table (e.g. GPT), PV identification, ext4 superblock copies etc. do write to the disk, but only a few bytes or extents, so your new space will not be allocated all at once - it will just be a bit more than before.
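If you want to see those numbers on your own pool, lvs can report the chunk size and the per-LV allocation directly - a quick check on the host, with the VG name from this thread:
Code:
# chunk size of the thin pool plus how much of each thin LV is actually allocated
lvs -o lv_name,lv_size,chunk_size,data_percent pve

For the pool LV itself (vmdata above), Data% and Meta% are the used fractions of the pool's data and metadata areas.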
 
Hm, I dunno, I tried extending the disk of another VM, and that didn't automatically change the device size. What did I do differently?

So again, lvextend followed by a rescan:

Code:
# lvs
  LV              VG  Attr       LSize   Pool   Origin          Data%  Meta%  Move Log Cpy%Sync Convert
  base-101-disk-0 pve Vri---tz-k  32.00g vmdata
  tz              pve -wi-ao---- 100.00g
  vm-100-disk-0   pve Vwi-aotz--  32.00g vmdata                 77.40
  vm-100-disk-1   pve Vwi-aotz-- 100.00g vmdata                 78.14
  vm-102-disk-0   pve Vwi-aotz--  80.00g vmdata base-101-disk-0 85.44
  vm-103-disk-0   pve Vwi-aotz--  32.00g vmdata base-101-disk-0 25.41
  vm-104-disk-0   pve Vwi-aotz--  32.00g vmdata                 10.43
  vm-105-disk-0   pve Vwi-aotz-- 132.00g vmdata base-101-disk-0 84.42
  vmdata          pve twi-aotz-- 500.00g                        58.94  14.76
# lvextend -L 150G pve/vm-105-disk-0
  Size of logical volume pve/vm-105-disk-0 changed from 132.00 GiB (33792 extents) to 150.00 GiB (38400 extents).
  Logical volume pve/vm-105-disk-0 successfully resized.
# lvs
  LV              VG  Attr       LSize   Pool   Origin          Data%  Meta%  Move Log Cpy%Sync Convert
  base-101-disk-0 pve Vri---tz-k  32.00g vmdata
  tz              pve -wi-ao---- 100.00g
  vm-100-disk-0   pve Vwi-aotz--  32.00g vmdata                 77.40
  vm-100-disk-1   pve Vwi-aotz-- 100.00g vmdata                 78.14
  vm-102-disk-0   pve Vwi-aotz--  80.00g vmdata base-101-disk-0 85.44
  vm-103-disk-0   pve Vwi-aotz--  32.00g vmdata base-101-disk-0 25.41
  vm-104-disk-0   pve Vwi-aotz--  32.00g vmdata                 10.43
  vm-105-disk-0   pve Vwi-aotz-- 150.00g vmdata base-101-disk-0 74.29
  vmdata          pve twi-aotz-- 500.00g                        58.94  14.76
# qm rescan --vmid 105
rescan volumes...
VM 105: update disk 'scsi0' information.

Still, when I go to the VM and run fdisk, it still shows the same number of blocks as before.
 
The Proxmox staff (and maybe the documentation too) suggest only using the "Resize disk" option in the GUI (or API) and going through their logic. The VM configuration will then be adapted automatically, and KVM/QEMU may be informed about the disk geometry change.
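For completeness, the same logic can be reached from the host shell with qm resize (VM ID and disk name taken from the rescan output above), which updates the VM config and notifies a running QEMU of the new size - so instead of the manual lvextend it would be something like:
Code:
# grow scsi0 of VM 105 to 150G through the Proxmox tooling
qm resize 105 scsi0 150G

After that the guest should see the larger device right away, and only the in-guest partition/PV/filesystem resize remains.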
 
So yet another thin lvm question:

Code:
  LV              VG  Attr       LSize   Pool   Origin          Data%  Meta%  Move Log Cpy%Sync Convert
  base-101-disk-0 pve Vri---tz-k  32.00g vmdata
  tz              pve -wi-ao---- 250.00g
  vm-103-disk-0   pve Vwi-aotz--  32.00g vmdata base-101-disk-0 18.66
  vm-104-disk-0   pve Vwi-aotz--  32.00g vmdata                 9.55
  vm-105-disk-0   pve Vwi-aotz-- 200.00g vmdata base-101-disk-0 62.68
  vm-106-disk-0   pve Vwi-aotz--  32.00g vmdata                 78.07
  vmdata          pve twi-aotz-- 500.00g                        32.25  8.87

I have here a thin pool of 500g with four 32g LVs and one 200g LV - so ~330g used of 500g. Do I understand that correctly?

Can I create another thin LV of ~170g and still be within the 500g thin pool?

What happens if I make a, say, 200g LV? Does the thin pool and/or LV break, or do I just get an error/warning?

Is there any easy way to see how much space is "left" in a thin pool? Does it even matter? I mean, until the allocated space is actually used, it shouldn't matter at all?
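A rough sanity check against the lvs output above, just reading the columns (LSize is the virtual size handed out, the pool's Data% is what has actually been written):
Code:
# provisioned (virtual) sizes - these may legitimately exceed the pool size:
#   4 x 32g + 200g = 328g handed out
# real data written into the 500g pool:
#   500g x 32.25% = ~161g used, leaving ~339g of real space in the pool
lvs -o lv_name,lv_size,data_percent,metadata_percent pve/vmdata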

So many questions, sorry :)
 
