Proxmox cluster on shared SAN storage

tommisan

Dec 9, 2014
Hi everyone,

I am currently using a 4-node Proxmox 3.4 cluster on NFS storage, mainly for web applications.
I would like to expand the cluster and, above all, move it to SAN storage.

The problem is that I cannot find much information (manuals, books, forums, etc.) on how to set up SAN shared storage accessible by Proxmox that would allow us to live migrate VMs between nodes and enjoy all the benefits of shared storage that we know.

The shared storage solutions that I found involve the use of:
  • a clustered filesystem (OCFS2?) that many people advise against using;
  • iSCSI with LVM, where the snapshot feature is missing and iSCSI is an unnecessary extra layer in a SAN environment;
  • a distributed filesystem (Ceph, GlusterFS, etc.), very interesting but unfortunately not applicable to a SAN.
I would appreciate it if you could give me some details on how to set up shared storage for a Proxmox cluster on SAN storage.

Thank you very much in advance
 
1 does not work without block storage, 2 is block storage, and 3 is not a SAN.

Your only choice is to use an iSCSI or FC-based SAN to get block storage and then put (thick) LVM on top of it. You can use GlusterFS or OCFS2, but those are filesystems, whereas LVM is block storage. Depending on your current infrastructure, you can use iSCSI if you already have 10 GbE; if not and the budget is tight, you can just buy used parts, e.g. a 4 Gb FC adapter for about 20 euros or a used 8 Gb one for roughly 180.

I'd suggest you install FreeNAS or OpenFiler as an appliance on a test machine and try it out with iSCSI yourself. It costs nothing but time.
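
If you prefer the command line for that test, the iSCSI target can be attached as a Proxmox storage roughly like this (the portal address, target IQN and storage ID below are just made-up placeholders):

Code:
# made-up portal and target; adjust to your FreeNAS/OpenFiler test box
pvesm add iscsi san-iscsi --portal 192.168.1.50 --target iqn.2005-10.org.freenas.ctl:pve-test
# a (thick) LVM storage can then be layered on top of the exported LUN,
# either in the web GUI or with "pvesm add lvm"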
 
Just some brief information about my environment and needs first.
I don't use the HA features for VMs running on the nodes.
My goal was just to have a shared SAN volume to perform manual migrations (online and offline).

I performed the following steps to get my 3-node cluster up and running for my needs (a consolidated command sketch follows the list).
- Base installation (4.1)
- Installed multipath to recognize the SAN volumes on the nodes (apt-get install multipath-tools)
- Installed the clvm extension for LVM and the gfs2 filesystem tools (apt-get install clvm gfs2-utils) on the nodes
- To load DLM into the kernel, add dlm to /etc/modules
- Set up the cluster (pvecm create ....., pvecm add .....)
- Enable cluster locking for LVM (lvmconf --enable-cluster)
- At first I manually started dlm_controld and clvmd from the shell
- Check that dlm is loaded and the above services are running!
- Set up LVM to use the SAN volume on one of the nodes (pvcreate /dev/mapper/......., vgcreate -cy ..., lvcreate ......)
- Create the cluster-capable filesystem gfs2 on the logical volume (mkfs.gfs2 -p lock_dlm -t ..... -j3 ....)
- Edit /etc/fstab (on every node) to mount the logical volume (/dev/....../.... /mnt/... gfs2 defaults,noatime,nodiratime 0 0)
- Add the storage in the Proxmox web GUI as a directory (tried LVM first, but this didn't work)
- Enable the dlm and lvm2-cluster-activation services, as they are not enabled by default (systemctl enable ...)
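
A rough consolidation of the commands above, with placeholder names (cluster name, multipath device, volume group and mount point are all made up):

Code:
apt-get install multipath-tools clvm gfs2-utils    # on every node
echo dlm >> /etc/modules                           # load DLM at boot
pvecm create mycluster                             # on the first node
pvecm add <ip-of-first-node>                       # on the other nodes
lvmconf --enable-cluster                           # switch LVM to cluster locking
dlm_controld && clvmd                              # started manually at first
pvcreate /dev/mapper/san_vol                       # on one node only
vgcreate -cy vg_san /dev/mapper/san_vol            # -cy marks the VG as clustered
lvcreate -l 100%FREE -n lv_san vg_san
mkfs.gfs2 -p lock_dlm -t mycluster:san -j3 /dev/vg_san/lv_san
echo '/dev/vg_san/lv_san /mnt/san gfs2 defaults,noatime,nodiratime 0 0' >> /etc/fstab
systemctl enable dlm lvm2-cluster-activation       # not enabled by default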

Two error messages concerning LVM2 (early activation and activation) came up during boot.
I disabled the lvm2-activation services to avoid the "LVM2-activation failed" message, because despite the message all LVs were accessible.
The "LVM2-early-activation failed" message I still have to look into .....

I upgraded to 4.4 and the cluster is still working.

If you got it running, let me know.
 
You do not need any of the clvm stuff anymore if you only manage the volumes with the GUI. This is a new feature of Proxmox VE 4.
 
As advised, I set up a small virtual environment with a 3-node Proxmox cluster and FreeNAS 9.3.

FreeNAS is configured as an iSCSI target (40 GB zvol) which is connected to the Proxmox cluster (not using the LUNs directly).

On top of it there is LVM. The 40 GB zvol is automatically "formatted" with the help of the web GUI into a shared PV (e.g. /dev/sdb) and VG (e.g. vg-zvol).
When you create a VM, the VG is then sliced into LVs (e.g. vm-100-disk-1).
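
As far as I can tell, what the GUI does here corresponds roughly to the following commands (device and names taken from my example above, the disk size is made up):

Code:
pvcreate /dev/sdb                          # turn the iSCSI LUN into a PV
vgcreate vg-zvol /dev/sdb                  # the VG used by the storage
# Proxmox then carves out one LV per virtual disk, e.g.:
lvcreate -n vm-100-disk-1 -L 32G vg-zvol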

What I don’t completely understand is the way Proxmox manages the mount/umount of the logical volumes.
If you perform a live migration, the result is a change in the attributes of the logical volumes (lvs displays a change from -wi------- to -wi-ao---- and vice versa).
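
If I read the lvs attribute field correctly, the fifth character ('a') means the LV is active and the sixth ('o') means it is open, so the migration seems to boil down to activation commands like these (names from my example, just my guess at what Proxmox runs internally):

Code:
lvchange -an vg-zvol/vm-100-disk-1         # deactivate on the source node
lvchange -ay vg-zvol/vm-100-disk-1         # activate on the target node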

The next step will be a test in an FC SAN environment, in which I will avoid the iSCSI layer, which I don't want to use (and also don't have at my disposal), presenting the SAN volumes (and thus the VGs) directly to the hosts.

Thank you for your answers
 
FC and iSCSI SANs behave the same: they both go through the SCSI layer of the kernel, so you will also get sd* devices. If you use multipath (which can also be done with iSCSI), you have to configure that beforehand.
 
You do not need any of the clvm stuff anymore if you only manage the volumes with the GUI. This is a new feature of Proxmox VE 4.
Where is this feature hidden? Do I directly use LVM within the web GUI?
 
Where is this feature hidden? Do I directly use LVM within the web GUI?

In the PVE code :p If you use our storage plugins, we will deactivate the logical volumes before moving a guest to a different node. No need to configure anything on your part, and no need for clvm or gfs2 for VM disks.
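
For reference, a shared thick-LVM storage entry in /etc/pve/storage.cfg ends up looking roughly like this (storage ID and VG name are just examples):

Code:
lvm: san-lvm
        vgname vg_san
        content images,rootdir
        shared 1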
 
In the PVE code :p If you use our storage plugins, we will deactivate the logical volumes before moving a guest to a different node. No need to configure anything on your part, and no need for clvm or gfs2 for VM disks.
So the only thing to do is to set up the physical volume, volume group and logical volume for the SAN volume on one of the nodes and then just add it as LVM storage in the web GUI?
 
Thank you for the hints. I'll try to attach the shared SAN without the additional stuff, just via the web GUI.

The above-mentioned setup came about because I was told twice, during a training and at an exhibition, that SAN wouldn't be supported and that the use of Ceph would be preferred..... Although Proxmox in general is really well documented, I couldn't find the information on how to attach SAN volumes successfully (might be caused by my English, which needs improvement, sorry for that ;-)
 
SANs (both over FC and over iSCSI) have been working perfectly for ages in Linux and therefore also in Proxmox VE. DRBD is technically very similar to the SAN approach, because it provides shared block devices on which you can work with LVM or any "real" cluster filesystem ("real" with respect to a physically identical backend storage).

Where did you get that training or hear that? Sometimes available technology like Linux software RAID (mdadm) is not supported or covered by Proxmox VE subscriptions, but it works nevertheless. I cannot remember a time when "real" shared storage was not supported. It is the most natural way to do clustering and probably the oldest as well.
 
The training was held by an official partner, and the exhibition was attended by some Proxmox guys in Germany.

Back to the topic.
Deleted the logical volume.
Disabled the dlm and lvm2-cluster-activation services.
Enabled the lvm2-activation service.
Unloaded the dlm module.
Changed locking_type in /etc/lvm/lvm.conf back to the default "1".

After the reboot, neither the volume group (vgdisplay) nor the physical volume (pvdisplay) is available.
Both show just the information of the Proxmox environment itself (VG Name pve ...).
Additionally, there is only:
"Skipping clustered volume group SAN-volgrp
Cannot process volume group SAN-volgrp"
As there is no existing volume group shown within the web GUI, it's not possible to add the storage as LVM.

So it seems that something additional has to be done to make the shared storage available within the web GUI.
 
Yes, you need to change the LVM volume group back to a non-clustered one. This is done by changing the cluster flag on the volume group:

Code:
vgchange -c n <vg-group>
 
Yep, that was helpful. I thought "-c" had to be set to share the storage with the other nodes.

So it's "just" (a consolidated command sketch follows the list):
- Base installation
- Install multipath to recognize the SAN volumes on all nodes (apt-get install multipath-tools)
- Set up the cluster
  • pvecm create .....
  • pvecm add .....
- Set up the volume group for the SAN volume on one node:
  • pvcreate /dev/mapper/.......
  • vgcreate <volumegroup> /dev/mapper...
- Add the storage as LVM in the Proxmox web GUI (with the "shared" option checked)
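
The whole sequence, with made-up names for the cluster, the multipath device, the volume group and the storage ID:

Code:
apt-get install multipath-tools            # on all nodes
pvecm create mycluster                     # on the first node
pvecm add <ip-of-first-node>               # on every other node
pvcreate /dev/mapper/san_vol               # on one node, using the multipath device
vgcreate vg_san /dev/mapper/san_vol
pvesm add lvm san-lvm --vgname vg_san --shared 1   # or add it in the web GUI with "shared" checked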

Nothing more to say... except "thank you" ;-)
 
Nice that it works now!

One simple remark:
- Install multipath to recognize the SAN volumes on all nodes (apt-get install multipath-tools)

This is only necessary if you have a multipathed environment, which is preferred with respect to HA, but it also works without it in single-pathed setups. In the end, the only difference to iSCSI is that you do not need to log in to your iSCSI targets, because FC works out of the box.
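
If you do go multipathed, a minimal /etc/multipath.conf along these lines is usually enough to start with (blacklist and per-device settings depend on your SAN):

Code:
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
# then reload and check the detected paths:
# systemctl restart multipath-tools
# multipath -ll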
 
Thanks Maxm,
without the "shared" option checked at the (thick) LVM storage level, migration ends up with this error message:

starting migration of VM 100 to node 'proxmox002' (10.xx.xx.xx)
found local disk 'lvm_1:vm-100-disk-1' (in current VM config)
can't migrate local disk 'lvm_1:vm-100-disk-1': can't live migrate attached local disks without with-local-disks option
ERROR: Failed to sync data - can't migrate VM - check log
aborting phase 1 - cleanup resources
ERROR: migration aborted (duration 00:00:01): Failed to sync data - can't migrate VM - check log
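
The flag can also be set from the CLI, which should be equivalent to ticking "shared" in the GUI (storage name taken from the error message above):

Code:
pvesm set lvm_1 --shared 1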
 
