Increasing volume size (LVM + iSCSI)

zoid

New Member
Mar 23, 2012
Hi, I have a cluster of 5 nodes running Proxmox 3.1 with KVM, using LVM on a shared iSCSI volume provided by a Dell MD3200i.
I'm starting to run out of space in the volume group, but my Dell storage still has unallocated space.
My question is: what's the best strategy to increase the capacity of the VG? I see 2 options:

1) Enlarge the storage volume, and then enlarge the PV that contains the VG.
2) Create another storage volume, share it with all the hosts, create a new PV, and extend the VG onto the new PV.

My concerns are regarding the propagation of the changes across the cluster.

Sorry if this has been asked before, I didn't find it.
 
You can also provision a new LUN and use it to increase the size of the VG. Benefit: no existing VM will notice anything. See vgextend.
E.g., the new LUN shows up as /dev/sdd.

Run fdisk /dev/sdd and create a new partition of type Linux LVM (8e).

So now you have /dev/sdd1 available for your volume group:
pvcreate /dev/sdd1
vgextend MyVG /dev/sdd1

Voilà, your VG now has the extra capacity of /dev/sdd1.

To remove a device from a VG, see pvmove and vgreduce.
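The steps above can be strung together as a minimal sketch. It is written as a dry-run that only prints each command; /dev/sdd and MyVG are placeholders for your actual LUN device and VG name, and you would drop the echo in run() to really execute:

```shell
#!/bin/sh
# Dry-run sketch: grow a shared VG by adding a new LUN as an extra PV.
# DEV and VG are placeholders -- substitute your own values.
DEV=/dev/sdd        # the new LUN as seen by this node
VG=MyVG             # the shared volume group to grow

run() { echo "$@"; }   # drop the echo to actually execute (as root)

run fdisk "$DEV"                 # create one partition, type 8e (Linux LVM)
run pvcreate "${DEV}1"           # initialise the new partition as a PV
run vgextend "$VG" "${DEV}1"     # add the PV to the volume group
run vgdisplay "$VG"              # confirm the extra free extents
```

fdisk is interactive, so in a real run you would create the partition by hand before continuing with pvcreate.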
 
That's what I was trying to say with approach #2.
 
Hi,
it depends on your config.

Approach #2 is the easy way.

#1 is OK (or even better) if you use the whole disk for LVM without a partition table; then a simple pvresize is enough. If you have a partition table, you must delete the partition in use and create a new, bigger one.
But the partition table stays in use, and the kernel keeps the old layout until reboot (you can work around this with partx). Deleting a partition that is in use is not the best thing: the new partition must start at exactly the same sector (switch fdisk to sector view).

Udo
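For the whole-disk case Udo describes, approach #1 boils down to a single pvresize once the storage side has grown the LUN and the node has rescanned it. A dry-run sketch (it only prints the commands; the PV path is a placeholder, substitute your own device, and drop the echo in run() to really execute):

```shell
#!/bin/sh
# Dry-run sketch for approach #1: PV on the whole disk, no partition table.
# PV is a placeholder path -- use your own multipath/iSCSI device.
PV=/dev/mapper/mylun

run() { echo "$@"; }   # drop the echo to actually execute (as root)

run pvresize "$PV"     # grow the PV to the enlarged LUN; the VG grows with it
run pvdisplay "$PV"    # verify the new PV size and free extents
```

pvresize on its own is enough here: it re-reads the device size, and the VG gains the extra extents without any vgextend.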
 
Mir, Udo, thanks for your quick responses.

I don't think I'm using a partition table on that volume.

Code:
root@proxmox4:~# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/mapper/36d4ae520007a727d00000b8e50a37dd1
  VG Name               storage02-cluster2-lvm
  PV Size               200.00 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              51199
  Free PE               7679
  Allocated PE          43520
  PV UUID               e4Zb2j-SRd1-pxac-LG69-SnGR-WiUB-39Q2FR

root@proxmox4:~# fdisk -l /dev/mapper/36d4ae520007a727d00000b8e50a37dd1

Disk /dev/mapper/36d4ae520007a727d00000b8e50a37dd1: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders, total 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/36d4ae520007a727d00000b8e50a37dd1 doesn't contain a valid partition table

In that case, on which node should I run the pvresize? Does it matter? Should I do anything on the other nodes to make the change visible?

Kind regards.
 
Since all nodes see the same PV, it shouldn't matter which node you make the change from, but I think you would need to instruct each node to rescan its SCSI bus: 'echo "- - -" > /sys/class/scsi_host/hostX/scan', where X denotes the host number.
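A sketch of that rescan on one node, again as a dry-run that just prints the commands. The iscsiadm line assumes open-iscsi is in use, which is typical for this kind of setup but not confirmed in the thread:

```shell
#!/bin/sh
# Dry-run sketch: make a node notice the grown LUN.
run() { echo "$@"; }   # replace echo with eval to really execute (as root)

# Rescan every SCSI host (mir's suggestion); host numbers differ per node.
for h in /sys/class/scsi_host/host*; do
    run "echo '- - -' > $h/scan"
done

# With open-iscsi, rescanning the sessions achieves the same in one step:
run "iscsiadm -m session --rescan"
```

Repeat this on every node in the cluster; once each node sees the bigger device, the result of the pvresize is visible cluster-wide.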
 
