Francesco M. Taurino

Hi all,

I have two servers directly connected to a Dot Hill Fibre Channel array.
I'd like to set this storage up as clustered LVM, NOT with a clustered
filesystem like OCFS2 or GFS, exposing all LUNs to both servers.

Some questions:
1- is it possible?
2- if yes, should I use plain LVM (create the VG on one node, mark the storage as shared, and rescan
the storage subsystem on the second node), or do I have to use CLVM? In the CLVM case,
does the storage have to be marked as shared or is that automatic?
3- in the LVM or CLVM scenario, does creating a new logical volume on one node "propagate" the
change to the second node?
4- in a shared LVM setup I'll lose the snapshot feature, but it's not clear to me whether this will
also affect the "snapshot" backup mode. If snapshot backups are unavailable, any
suggestion for speeding up the backup procedure?
5- if LVM or CLVM are not viable options, do you recommend or have experience with OCFS2?
Will stability, speed, snapshots and so on work as well as on ext4/XFS?

Many thanks,

Francesco
 
LVM on top of the shared LUN is the recommended solution. There is no need to use CLVM as long as you use the PVE tools to manage your storage.

Note: plain LVM does not provide the snapshot feature, and LVM-thin (which does) does not work on shared storage.
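
For reference, the resulting shared LVM storage definition in /etc/pve/storage.cfg looks roughly like this - a sketch only, "san-lvm" and "vg_san" are made-up names, and the same entry can be created from the Datacenter > Storage page in the GUI:

    lvm: san-lvm
            vgname vg_san
            content images
            shared 1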
 
Hi Francesco, just to confirm: I've got a client with this config deployed and it works just fine. A single disk array with 2 x Fibre Channel ports off the RAID controller; 2 Proxmox hosts are connected. At the bare-metal / Debian level, once the fibre ports are cabled you can see the link-up messages in dmesg. Then it is just a matter of configuring the shared storage target (LVM, as Dietmar suggests): mark it as 'shared', and you will need to add it manually (via the web GUI) on each of the 2 hosts. VMs you create backed on this storage will then be live-migration capable from Host1<>Host2, with very little time needed for the 'migration' (mainly the RAM sync/snap).
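
As an aside, the live migration can also be triggered from the command line; the VMID 100 and the target node name "pve2" here are just examples:

    qm migrate 100 pve2 --online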

I believe my client is still on an older version of Proxmox for this config, and is using an external iSCSI target as the 'sanity check' for HA quorum - a config that isn't specifically supported on the latest Proxmox, I believe (ie, 2-node cluster HA support). I think you can still do a non-HA cluster (ie, for live migration and centralized management of both Proxmox nodes from either/single node) without any issue on Proxmox.latest.

Tim
 
thank you guys.
it seems that my questions no. 1 and 2 have been answered.

anyone out there who can give me advice on the other 3 questions?

francesco
 
Hi, I think I answered your question 3 in my post?

ie,
- you must manually configure the SAN-backed LVM storage from each Proxmox node. There isn't any 'automatic propagation' of shared storage from one Proxmox node to the other nodes, for any (!) of the shared storage types, as far as I am aware (ie, iSCSI, etc)

ie, you basically

- set it up on the first proxmox node
- then go to the second node
- the only step you won't have to repeat is the creation of the LVM volume group on the SAN disk volume. But you do need to 'add' the storage into each Proxmox node and flag it as type = shared with the check-box option. Then it comes online. You must rinse and repeat this config process on all nodes that need access to the shared storage (a rough command-line equivalent is below).
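
Roughly, the command-line equivalent of the steps above, with made-up device/VG/storage names (adapt them to your own setup):

    # on the first node only: put LVM on the shared LUN
    pvcreate /dev/sdb                 # /dev/sdb is just an example device
    vgcreate vg_san /dev/sdb

    # add the storage, flagged as shared (same as ticking the box in the GUI)
    pvesm add lvm san-lvm --vgname vg_san --content images --shared 1

If the two hosts are joined in a PVE cluster, /etc/pve (and with it storage.cfg) is shared between the nodes, so the storage definition itself may only need to be added once; the part you repeat per node is making sure each host can actually see the LUN.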


Tim
 
oh and footnotes.

- I believe on the client I've got deployed with shared LVM, I do have the snapshot mode for VM backups working. Not 100% sure though; my vague recollection is that it "just works as expected" (see the vzdump sketch after these notes).
- no experience with ocfs2 so cannot comment.
- no comment on anything else I may have missed.
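
In case it helps with question 4: the backup mode is just a vzdump switch, so you can quickly see whether snapshot mode is accepted for a given guest. The VMID 101 and the storage name "backupstore" below are invented:

    vzdump 101 --mode snapshot --storage backupstore --compress lzo
    # fallback if snapshot mode is refused for a guest/storage combination
    vzdump 101 --mode suspend --storage backupstore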

T
 
thank you fortechsolutions.

I have experience with LVM and CLVM, but the plain LVM setup for Proxmox seemed a bit strange to me at first sight.
After some googling, and thanks to you and this forum, it's now clearer: if I add a new logical volume on a
shared FC LUN from a single node, the LVM subsystem on the second node is not aware of this new "slice" of storage
by itself; perhaps Proxmox keeps the LVM changes in sync on the other nodes via its cluster fs. I'll test in a few days and post the
results.
 
Hi all,
the time has come to set up the new cluster, with two servers connected via Fibre Channel
to the same array. Right now only one node has access to the LUNs, and all guests are working
fine. Just before enabling the second node to access the LUNs on the array, I've read this
page

http://www.tldp.org/HOWTO/LVM-HOWTO/sharinglvm1.html

which claims that before ANY change to the LVM metadata (and creating a new guest
implies creating a new logical volume), the other nodes MUST issue the "vgchange -an"
command to close the volumes. This would mean stopping (or migrating) all running guests on
the "secondary" nodes (think of the "primary" node as the first and only node which changes
the LVM metadata).
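
For reference, the procedure that HOWTO describes boils down to roughly this (the VG and LV names are only examples):

    # on every node EXCEPT the one changing metadata: close the volume group
    vgchange -an vg_san

    # on the single "primary" node: change the metadata, e.g. create a new LV
    lvcreate -L 32G -n vm-102-disk-1 vg_san

    # afterwards, reactivate the volume group on the other nodes
    vgchange -ay vg_san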

Before going into production, please share your experience and give me some advice.

Many, many, many thanks and have a nice weekend!
 
Hi,
don't worry about access from two (or more) nodes - PVE handles the locking with normal LVM!

You must only make sure that all nodes which have access to the LVM storage are part of one PVE cluster!
(If not - e.g. during a partial reinstallation of the cluster - you must take care yourself not to start VMs whose disks are open on another node.)

I have done this with several clusters (FC + SAS).

Simply run "lvs" on both nodes and you will see which LVs are open on which node (example below).
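
Something like this, for example (the output will of course differ on your nodes); the sixth character of the Attr column is 'o' when the LV is open on that node:

    lvs -o vg_name,lv_name,lv_attr
    # e.g. "-wi-ao----" = active and open here, "-wi-a-----" = active but not open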

Udo
 
hi all,

the cluster is up and running.
I've had only some problems with LVM device naming, solved with "preferred_names" in lvm.conf (snippet below).
Even new VM guests created on the first node can be immediately migrated to the second server.
Just some doubts about lvmetad...
I hope this setup will not blow up my virtual servers over time.
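
For anyone hitting the same device-naming issue, the relevant bit of /etc/lvm/lvm.conf looks roughly like this - the pattern is only an example, adjust it to how your multipath/FC devices are named:

    devices {
        preferred_names = [ "^/dev/mapper/" ]
    }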

So:
- set up the first PV on one node, with exclusive access to the FC LUN
- import the LVM storage in the Proxmox GUI, marking it as shared
- give the second server access to the "LVM enabled" LUN
- on the second server, make sure the "pvs", "vgs" and "lvs" commands report no errors
- try to migrate some guests to the second server

this setup has been working with no problems at all for over a week now.

Francesco
 
