Resizing lvm-thin after partition resizing while cloning

Sasha

Hi!
I just cloned my Proxmox SSD (128 GB) onto a 256 GB SSD. The LVM partition was resized during cloning.

Device       Start        End    Sectors    Size  Type
/dev/sda1     2048       4095       2048      1M  BIOS boot
/dev/sda2     4096     528383     524288    256M  EFI System
/dev/sda3   528384  500117503  499589120  238.2G  Linux LVM

But
~# pvesm status
Name       Type     Status  Total     Used      Available  %
local      dir      active  30316484  12115860  16637592   39.96%
local-lvm  lvmthin  active  69984256  0         69984256   0.00%

# vgdisplay pve
VG Size <118.99 GiB
PE Size 4.00 MiB
Total PE 30461
Alloc PE / Size 28768 / <112.38 GiB
Free PE / Size 1693 / 6.61 GiB

Is it possible to resize local-lvm to use the full size of the new partition?

I'd appreciate any help.
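For anyone skimming: the sequence that ends up working later in this thread is to grow the physical volume first and then extend the thin pool; no filesystem step is needed for the pool itself. A minimal sketch, assuming the PV is /dev/sda3 and the thin pool is pve/data as in the output above:

# grow the LVM physical volume to fill the enlarged partition
pvresize /dev/sda3

# give all remaining free space in the VG to the thin pool
lvextend -l +100%FREE pve/data

The pool simply gains room for thin volumes; only a filesystem sitting on a regular LV (such as pve-root) would need a separate grow step.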
 
Just ran the following and got:

pvresize /dev/sda3

pvdisplay


File descriptor 7 (pipe:[503718]) leaked on pvdisplay invocation. Parent PID 14224: bash

--- Physical volume ---
PV Name /dev/sda3
VG Name pve
PV Size 238.22 GiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 60984
Free PE 32216
Allocated PE 28768
PV UUID 0z4XF1-rLgE-EHcM-Myhl-HoJ9-Xife-i3C75S

vgdisplay

File descriptor 7 (pipe:[503718]) leaked on vgdisplay invocation. Parent PID 14224: bash

--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 61
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <238.22 GiB
PE Size 4.00 MiB
Total PE 60984
Alloc PE / Size 28768 / <112.38 GiB
Free PE / Size 32216 / 125.84 GiB
VG UUID axOcAq-XkwJ-lfMP-M9Sn-375R-5Y2A-3HECmp

pvscan

File descriptor 7 (pipe:[503718]) leaked on pvscan invocation. Parent PID 14224: bash
PV /dev/sda3 VG pve lvm2 [<238.22 GiB / 125.84 GiB free]
Total: 2 [1.14 TiB] / in use: 2 [1.14 TiB] / in no VG: 0 [0 ]

lvextend -l +100%FREE /dev/mapper/pve-data

File descriptor 7 (pipe:[503718]) leaked on lvextend invocation. Parent PID 14224: bash
Size of logical volume pve/data_tdata changed from 66.74 GiB (17086 extents) to <192.59 GiB (49302 extents).
Logical volume pve/data_tdata successfully resized.
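One detail worth checking after such a large extension (hedged, since it depends on how the pool was created): lvextend -l +100%FREE grows only the pool's data LV, not its metadata LV (pve/data_tmeta), and a thin pool that exhausts metadata space switches to read-only. Something along these lines can be used to inspect it and, if the VG still has free extents, enlarge it; the +1G increment is only an example value:

# show data and metadata usage of the thin pool
lvs -a -o name,size,data_percent,metadata_percent pve

# optionally grow the metadata LV (needs free extents in the VG,
# so consider doing this before handing everything to +100%FREE)
lvextend --poolmetadatasize +1G pve/data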
 
Is this right and enough, guys?
I don't understand the advice to run
xfs_growfs /
or
resize2fs /dev/mapper/pve-data

that I saw on the forums, which concerns adjusting the filesystem...
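For what it's worth, those filesystem-grow commands only apply to LVs that actually carry a filesystem. A hedged way to see which ones do on your box:

# list block devices with their filesystem type (if any) and mount point
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT

On a stock Proxmox VE install, typically only pve-root (and pve-swap) report a filesystem there; pve-data is a thin pool and has no FSTYPE of its own.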
 
Maybe this old post can help you:
Expanding a LVM partition to fill remaining drive space

And here is a real case. It isn't exactly the same, but some years ago I used this for a VM in Proxmox (a condensed command version follows after the steps):
Let's say we have a NS7 VM under Proxmox 5.x and we want to increase the disk size from 500 up to 2000.

Under the Proxmox GUI:
1. Click on the NS7-VM > Hardware > Hard Disk (xxx) > Resize disk > 1500 (the value is an increment, so +1500 on top of the existing 500 gives 2000)

On the NS7 Console:
2. fdisk /dev/sda > p > d > 2 > n > p > 2 > First sector <Enter> > Last sector <Enter> > w (delete and recreate partition 2 with the same first sector so it spans the new space; the data stays in place)

3. Reboot the server if you get this:
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)

4. pvresize /dev/sda2

5. lvresize -l +100%FREE /dev/VolGroup/lv_root

6. xfs_growfs /dev/VolGroup/lv_root
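Condensed into one console pass for reference; hedged, because the device and LV names (/dev/sda2, /dev/VolGroup/lv_root) belong to that particular NS7 VM and must be adjusted elsewhere:

# after growing the virtual disk in the GUI and recreating the partition
# with fdisk (steps 1-2 above), run on the VM:
partprobe                                      # re-read the partition table (or reboot)
pvresize /dev/sda2                             # grow the physical volume
lvresize -l +100%FREE /dev/VolGroup/lv_root    # grow the root LV
xfs_growfs /dev/VolGroup/lv_root               # grow the XFS filesystem (on ext4: resize2fs)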
 
Thank you. That's the way I did it. The only thing I'd like to figure out is what the last step is for. The thing is, I'm far from understanding these nuances and I work as an all-inclusive specialist.

I mean, I have the volume group pve.

And df -h just shows

Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 9.3M 1.6G 1% /run
/dev/mapper/pve-root 29G 12G 16G 43% /
tmpfs 7.8G 43M 7.8G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/fuse 30M 24K 30M 1% /etc/pve
tmpfs 1.6G 0 1.6G 0% /run/user/0

Using xfs_growfs leads to "it's not XFS"

resize2fs /dev/mapper/pve-data
resize2fs 1.44.5 (15-Dec-2018)
resize2fs: MMP: invalid magic number while trying to open /dev/mapper/pve-data
Couldn't find valid filesystem superblock.

resize2fs /dev/mapper/pve-root
resize2fs 1.44.5 (15-Dec-2018)
The filesystem is already 7733248 (4k) blocks long. Nothing to do!
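Those results are consistent with pve-data being a thin pool rather than an ext filesystem (so there is no superblock to resize) and with pve-root never having been extended. A quick, hedged way to confirm what each LV actually is:

# lv_attr starting with 't' = thin pool, 'V' = thin volume inside it, '-' = plain LV
lvs -a -o name,lv_attr,size,pool_lv pve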
 
...
Using xfs_growfs leads to "it's not XFS"
...

In my case the file system is XFS (NethServer uses CentOS and XFS). I'm sure there must be an equivalent for the file system that applies in your case; which one is it?
The resize2fs program will resize ext2, ext3, or ext4 file systems...

Another example is on Stack Overflow.
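If it helps, one hedged way to pick the right grow command on any given system is to check the filesystem type behind the mount point first:

# print the filesystem type backing /
df -T /

# then, only if the LV underneath was actually extended:
#   xfs_growfs /                       for XFS
#   resize2fs /dev/mapper/pve-root     for ext2/ext3/ext4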
 
