Increasing LVM Size from 7 TiB to 15 TiB on Proxmox while VMs are Running

Ala

New Member
Oct 11, 2024
Hello everyone,

I hope everyone is doing well.

I'm currently managing a Proxmox cluster that is connected to Nimble storage via iSCSI. We have an LVM volume that is currently at 7 TiB, and we need to increase it to 15 TiB. I'd like to know if it's possible to perform this resize operation while the VMs are running. Here's an overview of our setup:

  • Current LVM Size: 7 TiB
  • Desired LVM Size: 15 TiB
  • Storage Backend: Nimble Storage connected through iSCSI
  • Proxmox Version: 8.3.2
I wanted to play it safe at first by creating a new volume on Nimble and then moving all the disks over to the newly added LVM storage, but some VMs have disks that I couldn't move. I get this error: TASK ERROR: storage migration failed: block job (mirror) error: drive-efidisk0: Source and target image have different sizes (io-status: ok).
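From what I've read, the live mirror job compares the source and target image sizes, so one commonly suggested workaround is to retry the move with the VM powered off (offline moves go through qemu-img convert instead of drive-mirror). Roughly something like this, where 100 is a placeholder VMID and nimble-new a placeholder storage name:

```shell
# Assumption: 100 is the VMID and nimble-new is the target LVM storage;
# adjust both to the actual environment.
qm shutdown 100

# Offline moves use qemu-img convert, which tolerates the size
# difference that the live mirror job rejects for efidisk0.
qm move-disk 100 efidisk0 nimble-new

qm start 100
```

Would that be the recommended way to handle the EFI disks, or is there an online option I'm missing?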


root@Proxmox:~# vgdisplay
--- Volume group ---
VG Name vg-data
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 68
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 31
Open LV 11
Max PV 0
Cur PV 1
Act PV 1
VG Size <15.00 TiB ### this is the new VG I added to test the "Move Disk" operation ###
PE Size 4.00 MiB
Total PE 3932159
Alloc PE / Size 1212160 / 4.62 TiB
Free PE / Size 2719999 / <10.38 TiB


--- Volume group ---
VG Name vg_data
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 116
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 50
Open LV 14
Max PV 0
Cur PV 1
Act PV 1
VG Size <7.00 TiB
PE Size 4.00 MiB
Total PE 1835007
Alloc PE / Size 1504537 / <5.74 TiB
Free PE / Size 330470 / 1.26 TiB


--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <446.10 GiB
PE Size 4.00 MiB
Total PE 114201
Alloc PE / Size 110104 / 430.09 GiB
Free PE / Size 4097 / 16.00 GiB

  1. Has anyone performed a similar LVM resize while VMs were running? What was your experience?
  2. Are there any additional precautions I should be aware of to minimize risk while performing this operation?
I appreciate any insights, suggestions, or experiences you can share. Thanks in advance!

Best regards,
 
Increasing both the LVM volume group and the LVM volumes themselves can be done online; you don't need to move your images from the old volume group to a new one. There are lots of threads in the forum on how to do it, or refer to the LVM manual or the Red Hat documentation.
Best is to have a current backup before doing so, in case you run into any hardware/software problems along the way.
As for your "move experiments": you can migrate the disk and the config file manually in the shell if you want to.
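For an iSCSI-backed VG, the online grow is roughly the following sketch. The device, map, and VG names are placeholders; check yours with pvs/vgs and multipath -ll first:

```shell
# 1. Grow the volume on the Nimble side from 7 TiB to 15 TiB
#    (done in the Nimble GUI/CLI, nothing on the Proxmox host yet).

# 2. Make the iSCSI initiator notice the new size on all sessions:
iscsiadm -m session --rescan

# 3. If the LUN is behind multipath, resize the multipath map too
#    (replace "mpatha" with your map name from `multipath -ll`):
multipathd resize map mpatha

# 4. Grow the physical volume to fill the enlarged device
#    (replace with your actual PV path):
pvresize /dev/mapper/mpatha

# 5. Verify: the free space in the VG should now reflect the extra 8 TiB.
vgs vg_data
```

The VG picks up the new extents as soon as pvresize runs; individual guest disks only need growing if you actually want them bigger, which you can do per disk with `qm resize <vmid> <disk> +<size>` while the VM is running.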