Add multiple iSCSI storage to a volume group?

bitbud

New Member
Dec 2, 2008
Via the web interface storage configuration form, I've created multiple iSCSI targets. I've then created an LVM Volume Group, selecting a single target as the 'base storage', and assigned a 'volume group name'.

How can I add the second iSCSI target to the volume group?

It appears the web interface does not support this, but it is possible to do it from the command line.

My question then is: is this process supported? Standalone, or clustered? As the details of the clustering functionality are not documented, I am uncertain how the LVM group information is moved between cluster nodes, although it appears that every node in the cluster mounts every iSCSI target defined on the master.

Will doing this cause problems? One of the advantages of using the LVM layer is the ability to add and remove multiple Physical Volumes, especially without incurring downtime.
 
It appears the web interface does not support this, but it is possible to do it from the command line.

You can do it using lvm tools (see lvm2 howto).
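For reference, a minimal sketch of what this looks like with the LVM tools. The device and VG names here are hypothetical examples; substitute the block device your second iSCSI target appears as and your actual volume group name:

```shell
# /dev/sdc = block device of the second iSCSI target (example name)
# vg_iscsi = the existing volume group (example name)

pvcreate /dev/sdc            # label the new device as an LVM physical volume
vgextend vg_iscsi /dev/sdc   # add the new PV to the existing volume group
vgdisplay vg_iscsi           # verify the VG now spans both PVs
```

Both operations run online; existing logical volumes in the VG are unaffected.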

My question then is: is this process supported? Standalone, or clustered? As the details of the clustering functionality are not documented, I am uncertain how the LVM group information is moved between cluster nodes, although it appears that every node in the cluster mounts every iSCSI target defined on the master.

LVM group information is on shared storage in that case, so it is not moved at all. Locking is done with our own lock protocol.


Will doing this cause problems? One of the advantages of using the LVM layer is the ability to add and remove multiple Physical Volumes, especially without incurring downtime.

My personal feeling is that you are better off using one PV per VG - it is easier to handle when one target is offline.
 
Thanks for the info Dietmar, you are always helpful.

True, while a 1-to-1 PV-to-VG mapping is easier to manage, there are huge benefits to having multiple PVs in a VG, such as:
- the ability to dynamically expand the size of the Volume Group by adding additional storage
- the ability to migrate from one storage location to another while online
- gain better performance on a VG by using multiple PVs, in this case, multiple targets

Perhaps something to consider for a future release: the ability to add multiple PVs to a VG in the web console, and the ability to manage the VG/LV details from the console, such as moving extents from one PV to another and assigning specific extents to specific LVs.

This would give even greater flexibility, and would also provide 'Storage Migration'.
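The extent-moving part is already possible today from the shell with pvmove. A rough sketch, using hypothetical device and VG names:

```shell
# Migrate all allocated extents off the old PV onto the other PVs in
# the VG (runs online; LVs stay usable while extents are moved):
pvmove /dev/sdb

# Or move them to one specific destination PV:
pvmove /dev/sdb /dev/sdc

# Once /dev/sdb holds no extents, drop it from the volume group:
vgreduce vg_iscsi /dev/sdb
```

This vgextend-then-pvmove-then-vgreduce sequence is the usual way to migrate a VG from one storage back-end to another without downtime.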

So long as it is doable from the command line without breaking the system, I am fine as it is. Levels of user access control are more pressing.

Another question - does VZDump only backup based on the first VG assigned to a VM, or does it backup ALL storage associated with a VM?

My context is all concerning KVM VMs, BTW.

Thanks
 
...

Another question - does VZDump only backup based on the first VG assigned to a VM, or does it backup ALL storage associated with a VM?

My context is all concerning KVM VMs, BTW.

Thanks

vzdump takes all.
 
- the ability to dynamically expand the size of the Volume Group by adding additional storage

Usually the NAS/SAN can resize the iSCSI target, so there are other ways to get the same result.

- gain better performance on a VG by using multiple PVs, in this case, multiple targets

Really, why do you think so? Have you done some benchmarks?

Perhaps something to consider in a future release, the ability to add multiple PVs to a VG in the web console

AFAIK there are also problems with write barriers when using multiple PVs.

- Dietmar
 
- the ability to dynamically expand the size of the Volume Group by adding additional storage

Usually the NAS/SAN can resize the iSCSI target, so there are other ways to get the same result.

Unless your SAN is full, and you want to connect an additional SAN that you have available, for example.
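For completeness, when the target itself is grown on the SAN, the change can usually be picked up online too. A sketch, with hypothetical device and VG names:

```shell
# After growing the LUN on the SAN, rescan the iSCSI sessions so the
# initiator sees the new device size:
iscsiadm -m session -R

# Grow the PV to fill the resized device; the VG gains the new
# space as free extents:
pvresize /dev/sdb
vgdisplay vg_iscsi   # confirm the additional free extents
```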

- gain better performance on a VG by using multiple PVs, in this case, multiple targets

Really, why do you think so? Have you done some benchmarks?
Multiple PVs means multiple drives, i.e. multiple 'places' to read and write, and thus better performance. For example, a stripe across two drives is faster than a single drive.
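Striping across PVs is chosen at LV creation time with lvcreate's -i/-I options; a sketch, with hypothetical names:

```shell
# Create a 100 GiB logical volume striped across 2 PVs in the VG,
# with a 64 KiB stripe size (the VG must contain at least 2 PVs):
lvcreate -L 100G -i 2 -I 64 -n lv_vm vg_iscsi
```

Note that a striped LV makes the VG depend on all of its PVs being online, which is Dietmar's point about one-PV-per-VG being easier to handle.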


- Perhaps something to consider in a future release, the ability to add multiple PVs to a VG in the web console

AFAIK there are also problems with write barriers when using multiple PVs.

A key feature of LVM is multiple PVs, and a great reason to use it. I am unaware of write barrier issues in LVM, as it doesn't support them AFAIK. Is this more an issue of how it is used in a cluster configuration?
 
A key feature of LVM is multiple PVs, and a great reason to use it. I am unaware of write barrier issues in LVM, as it doesn't support them AFAIK. Is this more an issue of how it is used in a cluster configuration?

I can't see any cluster-related problems.
 
