[SOLVED] Cannot extend volume group (over iSCSI)

Pigi_102

New Member
May 24, 2024
Hello all.
I have a cluster (3 nodes) for testing purposes, connected to a TrueNAS, and I'm experimenting with several storage backends.
One of those storages is an iSCSI LUN on which I have created an LVM.
The LUN (FWIW) is seen by the nodes as /dev/sdc and has not been partitioned (I'm quite sure I did everything from the GUI...).
Now I have expanded the LUN from 20G to 40G and this can be seen from vgs:
Code:
root@pve1:~# vgdisplay vg-iscsi
  --- Volume group ---
  VG Name               vg-iscsi
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.99 GiB
  PE Size               4.00 MiB
  Total PE              5118
  Alloc PE / Size       0 / 0   
  Free  PE / Size       5118 / 19.99 GiB
  VG UUID               nvaG5c-pyq1-YRJC-n9H4-Yasl-QQvy-hMwsVP

and as you can see, the free space has been "seen" from the sdc side but not from the vg side.
Since there is no (real) logical volume on top of it, I cannot do an lvextend -l +100%FREE as I would in a normal LVM setup.
I've tried to move onto this LVM a disk from a virtual machine slightly bigger than the current size, but smaller than the "extended" size, and I got:
"Volume group "vg-iscsi" has insufficient free space (5118 extents): 6656 required"

How am I supposed to use this new space?
I'd rather not delete and recreate the LVM, as I think that's not the right way to do this...

Thanks in advance.


Pigi_102
 
Hi @Pigi_102 ,
You may want to look at this article: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
While it does not specifically address the extension case, it does illustrate the LVM structure. You will notice that underneath the VG there is something called a Physical Volume. As you extend your available space, you need to do it at each layer.
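As a rough per-layer illustration (a sketch only, assuming the PV sits directly on /dev/sdc and, purely for illustration, an LV named vm-disk on top):
Code:
# 1. Make the kernel see the new LUN size (iSCSI example)
iscsiadm -m session --rescan
# 2. Grow the Physical Volume to fill the enlarged device
pvresize /dev/sdc
# 3. The Volume Group picks up the new PV size automatically; verify with:
vgs
# 4. Only if a Logical Volume exists and should grow too (vm-disk is a hypothetical name)
lvextend -l +100%FREE /dev/vg-iscsi/vm-disk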

There are many articles online that will guide you, for example: https://azvise.com/2020/09/03/linux-extend-lvm-partition-after-resizing-disk/

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hello, and thanks for your reply.
I know quite well how LVM works (I've been RHCSA and RHCE certified for a long time...). The physical disk has been extended, and that's why the free PEs show:
Free PE / Size 5118 / 19.99 GiB

What's missing is how to get those free PEs seen by Proxmox, given that there is no logical volume to extend; LVs only get created when creating a disk... but it seems that LVM (lvmetad?) is not able to pick that up.
The easiest way would be to destroy the VG and recreate it, but that sounds a bit drastic to me :)
 
There must be some confusion then.

and as you can see, the free space has been "seen" from the sdc side
We don't see that, as you did not include that output (lsscsi -ss ; lsblk /dev/sdc)
Now I have expanded the LUN from 20G to 40G and this can be seen from vgs
That's not what your output appears to show:
VG Size: 19.99 GiB - the total size of the volume group.
PE Size: 4.00 MiB - a Physical Extent (the smallest allocatable unit) is 4 MiB in size.
Alloc PE / Size: 0 / 0 - no PEs have been allocated yet, meaning no Logical Volumes (LVs) have been created in this VG.
Free PE / Size: 5118 / 19.99 GiB - all 5118 PEs (5118 × 4 MiB ≈ 19.99 GiB) are available for allocation.

VGs are _always_ based on PVs. Do a pvs/pvdisplay to see them. You can place a PV directly on a raw device, e.g. /dev/sdc. Most people partition /dev/sdc first, meaning that you would have to extend the partition first.
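For example, to check whether the extra space has actually reached the PV layer (a minimal sketch, assuming the PV sits directly on /dev/sdc):
Code:
# Size the kernel reports for the block device
lsblk /dev/sdc
# Size and free space recorded in the LVM metadata for that PV
pvdisplay /dev/sdc
pvs -o pv_name,vg_name,pv_size,pv_free
# Totals at the VG level
vgs vg-iscsi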

Good luck!


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I think you are still confused.
- You said that you are working with /dev/sdc
- You said that you are working with vg-iscsi, implying that it is based on /dev/sdc

Your latest output shows:
sdb / 40G and it is the basis for vg-iscsi
sdc / 60G and it is the basis for vg-bck

You also misunderstand what "Free PE" means - those PEs are already part of the VG, they are just not used. This makes sense, as there is nothing on vg-iscsi, as confirmed by pvs.
I would take a step back and carefully review everything again...


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
No, sorry. My fault. I've removed the post, as I had taken the data after a reboot and some device names had changed.
I'm rebuilding the environment as it was, to start clean, and will post as soon as I have everything set up as before.
Sorry again.
 
Here we are.
Freshly attached a 20G iSCSI LUN, seen as sdc, then added the iSCSI storage from Datacenter -> Storage -> Add -> iSCSI, and after that an LVM on top of this newly added iSCSI LUN, via Datacenter -> Storage -> Add -> LVM (not thin):
Code:
root@pve1:~# lsblk /dev/sdc
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdc    8:32   0  20G  0 disk
root@pve1:~# lsscsi -ss
[2:0:0:0]    disk    ATA      Crucial_CT250MX2 MU05  /dev/sda   232GiB
[4:0:0:4]    disk    TrueNAS  iSCSI Disk       0123  /dev/sdb   60.0GiB
[5:0:0:5]    disk    TrueNAS  iSCSI Disk       0123  /dev/sdc   20.0GiB
root@pve1:~# lsblk /dev/sdc
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdc    8:32   0  20G  0 disk
root@pve1:~# pvs
  PV         VG                                        Fmt  Attr PSize   PFree 
  /dev/sda3  pve                                       lvm2 a--  <34.00g  <4.13g
  /dev/sda4  ceph-fb250de9-ebf5-4ea5-b438-24d22440fdf4 lvm2 a--  <70.00g      0
  /dev/sdb   vg-bck                                    lvm2 a--   59.99g 504.00m
  /dev/sdc   vg_iscsi                                  lvm2 a--   19.99g  19.99g
root@pve1:~# vgdisplay vg_iscsi
  --- Volume group ---
  VG Name               vg_iscsi
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.99 GiB
  PE Size               4.00 MiB
  Total PE              5118
  Alloc PE / Size       0 / 0   
  Free  PE / Size       5118 / 19.99 GiB
  VG UUID               W73Eyj-UJMq-RMqX-8G9P-569q-c7gZ-sWb2CC
When creating a LUN, and moreover an LVM on top of it, from the GUI you cannot partition the device, so the whole disk is used as the PV.
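For reference, the GUI steps above are roughly equivalent to doing this by hand on the raw device (a sketch, not necessarily the exact commands Proxmox runs):
Code:
# Turn the whole iSCSI disk into a Physical Volume, then build the VG on it
pvcreate /dev/sdc
vgcreate vg_iscsi /dev/sdc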
Btw, that's the result from the storage view of one of the nodes:

[screenshot: storage view of the node]

and this from Disks -> LVM of the same host:
[screenshot: Disks -> LVM view of the node]

vg_iscsi is 20G.


Now I'm going to change the LUN size from the NAS, and the Linux kernel sees this without any intervention:
Code:
root@pve1:~# dmesg 
[18579.386149] sd 5:0:0:5: Capacity data has changed
[18579.786623] sd 5:0:0:5: [sdc] 62914560 512-byte logical blocks: (32.2 GB/30.0 GiB)
[18579.786629] sd 5:0:0:5: [sdc] 16384-byte physical blocks
[18579.788246] sdc: detected capacity change from 41943072 to 62914560
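
(Had the kernel not noticed the resize on its own, a rescan can usually be forced; a minimal sketch, with the device path taken from this example:)
Code:
# Re-read the capacity of this one SCSI device
echo 1 > /sys/class/block/sdc/device/rescan
# or rescan every open-iscsi session
iscsiadm -m session --rescan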

let's see what lsscsi/lsblk/vgdisplay show:
Code:
root@pve1:~# lsscsi -ss
[2:0:0:0]    disk    ATA      Crucial_CT250MX2 MU05  /dev/sda   232GiB
[4:0:0:4]    disk    TrueNAS  iSCSI Disk       0123  /dev/sdb   60.0GiB
[5:0:0:5]    disk    TrueNAS  iSCSI Disk       0123  /dev/sdc   30.0GiB
root@pve1:~# lsblk /dev/sdc
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdc    8:32   0  30G  0 disk 
root@pve1:~# pvs
  PV         VG                                        Fmt  Attr PSize   PFree  
  /dev/sda3  pve                                       lvm2 a--  <34.00g  <4.13g
  /dev/sda4  ceph-fb250de9-ebf5-4ea5-b438-24d22440fdf4 lvm2 a--  <70.00g      0 
  /dev/sdb   vg-bck                                    lvm2 a--   59.99g 504.00m
  /dev/sdc   vg_iscsi                                  lvm2 a--   19.99g  19.99g
root@pve1:~# vgdisplay vg_iscsi
  --- Volume group ---
  VG Name               vg_iscsi
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.99 GiB
  PE Size               4.00 MiB
  Total PE              5118
  Alloc PE / Size       0 / 0   
  Free  PE / Size       5118 / 19.99 GiB
  VG UUID               W73Eyj-UJMq-RMqX-8G9P-569q-c7gZ-sWb2CC
so lsscsi -ss and lsblk have seen the capacity change, but pvs and vgdisplay haven't.
Also from the GUI there is no change (that's expected, as pvs hasn't changed).
A pvscan --cache didn't help either:
Code:
root@pve1:~# pvscan --cache
  pvscan[180550] PV /dev/sda3 online.
  pvscan[180550] PV /dev/sda4 online.
  pvscan[180550] PV /dev/sdb online.
  pvscan[180550] PV /dev/sdc online.
root@pve1:~# pvs
  PV         VG                                        Fmt  Attr PSize   PFree  
  /dev/sda3  pve                                       lvm2 a--  <34.00g  <4.13g
  /dev/sda4  ceph-fb250de9-ebf5-4ea5-b438-24d22440fdf4 lvm2 a--  <70.00g      0 
  /dev/sdb   vg-bck                                    lvm2 a--   59.99g 504.00m
  /dev/sdc   vg_iscsi                                  lvm2 a--   19.99g  19.99g

Back to my original question: How can I make use of this added space?

Thanks again.

Pigi_102
 

And a very special thank you!
That did the job !
Code:
root@pve1:~# pvresize /dev/sdc
  Physical volume "/dev/sdc" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
root@pve1:~# pvs
  PV         VG                                        Fmt  Attr PSize   PFree  
  /dev/sda3  pve                                       lvm2 a--  <34.00g  <4.13g
  /dev/sda4  ceph-fb250de9-ebf5-4ea5-b438-24d22440fdf4 lvm2 a--  <70.00g      0 
  /dev/sdb   vg-bck                                    lvm2 a--   59.99g 504.00m
  /dev/sdc   vg_iscsi                                  lvm2 a--   29.99g  29.99g
root@pve1:~# vgdisplay vg_iscsi
  --- Volume group ---
  VG Name               vg_iscsi
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               29.99 GiB
  PE Size               4.00 MiB
  Total PE              7678
  Alloc PE / Size       0 / 0   
  Free  PE / Size       7678 / 29.99 GiB
  VG UUID               W73Eyj-UJMq-RMqX-8G9P-569q-c7gZ-sWb2CC
[screenshot: updated storage view after the resize]

Thanks again !
Pigi_102
 
@bbgeek17 : yes, normally you should use pvresize, then pvresize.
I assume you meant "vgextend" as the last step^.

"pvresize" is wired so that the VG above the PV is updated automatically (in most cases). The "vgextend" is meant for concatenating brand new PV into an existing VG, a very different operation.

Cheers.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox