How to properly resize LVM-Thin volume?

DomKnigi

New Member
Sep 10, 2017
Greetings!

We have just installed this amazing VE. It is still running without any serious problems, except for a few.
We use LVM-thin storage for the VM in question and need to reduce one of its disks in size. Initially the disk was about 480 GB.

First we shrank the filesystem inside the VM. Then we stopped the VM and ran the following:

lvreduce /dev/pve/vm-107-disk-2 --size 320G -v

It said:

WARNING: Reducing active logical volume to 320.00 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce pve/vm-107-disk-2? [y/n]: y
Accepted input: [y]
Archiving volume group "pve" metadata (seqno 224).
Reducing logical volume pve/vm-107-disk-2 to 320.00 GiB
Size of logical volume pve/vm-107-disk-2 changed from 330.00 GiB (84480 extents) to 320.00 GiB (81920 extents).
Loading pve-data_tdata table (253:3)
Suppressed pve-data_tdata (253:3) identical table reload.
Loading pve-data_tmeta table (253:2)
Suppressed pve-data_tmeta (253:2) identical table reload.
Loading pve-data-tpool table (253:4)
Suppressed pve-data-tpool (253:4) identical table reload.
Loading pve-vm--107--disk--2 table (253:14)
Not monitoring pve/data with libdevmapper-event-lvm2thin.so
Suspending pve-vm--107--disk--2 (253:14) with device flush
Suspending pve-data-tpool (253:4) with device flush
Suspending pve-data_tdata (253:3) with device flush
Suspending pve-data_tmeta (253:2) with device flush
Loading pve-data_tdata table (253:3)
Suppressed pve-data_tdata (253:3) identical table reload.
Loading pve-data_tmeta table (253:2)
Suppressed pve-data_tmeta (253:2) identical table reload.
Loading pve-data-tpool table (253:4)
Suppressed pve-data-tpool (253:4) identical table reload.
Resuming pve-data_tdata (253:3)
Resuming pve-data_tmeta (253:2)
Resuming pve-data-tpool (253:4)
Resuming pve-vm--107--disk--2 (253:14)
Monitoring pve/data
Creating volume group backup "/etc/lvm/backup/pve" (seqno 225).
Logical volume pve/vm-107-disk-2 successfully resized.

After that I see a warning in lvs output:

root@pve:~# lvs
WARNING: Thin volume pve/vm-107-disk-2 maps 463.49 GiB while the size is only 320.00 GiB.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 1.52t 93.67 46.51
root pve -wi-ao---- 96.00g
swap pve -wi-ao---- 8.00g
vm-100-disk-1 pve Vwi-aotz-- 40.00g data 40.18
vm-101-disk-1 pve Vwi-aotz-- 40.00g data 37.36
vm-102-disk-4 pve Vwi-aotz-- 40.00g data 75.55
vm-103-disk-1 pve Vwi-a-tz-- 32.00g data 24.45
vm-104-disk-1 pve Vwi-aotz-- 80.00g data 55.43
vm-104-disk-2 pve Vwi-aotz-- 600.00g data 99.99
vm-105-disk-2 pve Vwi-aotz-- 32.00g data 70.70
vm-107-disk-2 pve Vwi-aotz-- 320.00g data 100.00
vm-108-disk-1 pve Vwi-aotz-- 40.00g data 99.43
vm-108-disk-2 pve Vwi-aotz-- 250.00g data 86.17

Can someone tell me whether this is dangerous or not, and how to get rid of that warning?
 
Ah, some configuration details are worth mentioning, I think...

It is Proxmox Virtual Environment 5.0-23 installed on a Dell PowerEdge R720 server, so it's a physical machine.
 
Reducing the size of filesystems or block devices is always finicky. Basically you always want to shrink things from the inside out. So first you shrink the FS, then the partition, then the disk or volume.
The error says, in plain English: "You are allowing the VM to use 463.49 GiB of a 320 GiB disk. Things are bound to break if it uses more than 320 GiB, so don't act all surprised when it comes to this."
To get rid of the warning you want to shrink the FS in the VM to a bit smaller than necessary, then resize the partition to fit the new disk size, then grow the FS to fit.
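
If it helps, here is a rough sketch of that inside-out sequence, assuming an ext4 filesystem on a /dev/sdb1 partition inside the guest (device names, sizes and the filesystem type are examples, not taken from this thread):

# Inside the guest: shrink the filesystem to a bit less than the target size.
umount /dev/sdb1                      # resize2fs can only shrink an unmounted ext4 FS
e2fsck -f /dev/sdb1                   # resize2fs requires a clean filesystem check first
resize2fs /dev/sdb1 310G              # shrink slightly below the final target

# Inside the guest: shrink the partition so it ends within the new disk size
# (parted warns about possible data loss when shrinking).
parted /dev/sdb resizepart 1 319GiB   # end a little before the new 320 GB disk size to be safe

# On the Proxmox host, with the VM stopped: shrink the thin LV itself.
lvreduce --size 320G /dev/pve/vm-107-disk-2

# After booting the VM again: grow the filesystem to fill the now correctly sized partition.
resize2fs /dev/sdb1

The exact tools differ for other filesystems (NTFS needs ntfsresize, XFS cannot be shrunk at all), but the inside-out order stays the same.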
 
Hi pabernethy! Thanks for your reply.

But I've already done this. You can see my partitions in the VM are aware of only 320 GB total, not the 480 GB it was initially.

(screenshot: partition layout inside the VM, showing a 320 GB disk)
 

Did you defragment the FS before resizing?
 
If you use GPT, the partition table stores the last usable LBA, which might result in that mapping if it is not updated during the resize.
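
If you want to check that inside the guest, something like the following should show whether the GPT still points past the new end of the disk (sgdisk comes from the gdisk package; /dev/sdb is just an example device, and this is only a sketch):

sgdisk -p /dev/sdb    # prints the "last usable sector"; compare it against the new disk size
sgdisk -e /dev/sdb    # relocate the backup GPT structures to the actual end of the smaller disk
                      # (sgdisk may complain that the old backup header is damaged or missing)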
 
Hello Symbol!

Yes, it could be that, thanks!
As that bug report mentions, a snapshot of the volume is needed to reproduce it. And yes, we had been doing snapshot-based backups of this disk before resizing.

So is this an lvm-thin-specific bug, and what do we need to do to resolve it? Would deleting the VM and restoring it from backup be enough?
 
Well, in https://bugzilla.redhat.com/show_bug.cgi?id=1459646#c3, Red Hat engineer Zdenek Kabelac talks about trimming with blkdiscard.
I don't know whether there is a snapshot you can see with lvs -a, whether you should run blkdiscard on the LV specifying the blocks beyond the new size, or whether it can only be used before resizing... You may ask him :)
Since you can destroy everything with this, you may also want to test first what happens on a plain Linux system with an LVM thin pool inside a VM...
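
Just to make the idea concrete, such a blkdiscard call might look like the following, run on the host with the VM stopped and before the lvreduce, while the LV still has its old size. This is untested here and irreversibly throws away everything beyond the 320 GiB mark, so treat it purely as a sketch:

# Discard every block of the thin LV from the 320 GiB offset to the end.
blkdiscard --offset $((320 * 1024 * 1024 * 1024)) /dev/pve/vm-107-disk-2
# Only afterwards shrink the LV down to the already-discarded boundary.
lvreduce --size 320G /dev/pve/vm-107-disk-2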
 
I ran into this too. To fix it I had to make a backup of the affected VM, then blow it away and do a restore.
That was the only thing that worked for me.

Would like to be able to shrink the VM without having to do a restore, though!
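
For anyone searching later, the backup-and-restore workaround boils down to something like this on the host (VM ID, storage names and the dump path are examples, adjust them to your setup):

vzdump 107 --mode stop --compress lzo --storage local   # full backup of the stopped VM
qm destroy 107                                          # remove the original VM and its thin LVs
qmrestore /var/lib/vz/dump/vzdump-qemu-107-*.vma.lzo 107 --storage local-lvm   # restore; the disk is recreated at its current 320G size

(The glob assumes only one dump of VM 107 is lying around; otherwise point qmrestore at the exact archive.)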
 
