HA with LVM and GFS2, shared bind mounts

Hello,

I have installed Proxmox VE 5 with HA + Fibre Channel multipath. For now there are two nodes (there should be three); the third node will be added this week.

My target is:
shared storage for containers (bind mounts)
Containers should fail over (one dedicated LUN) based on the Proxmox LVM storage model with the PVE locking mechanism, and that part works fine.
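For reference, my shared LVM storage entry in /etc/pve/storage.cfg looks roughly like this (the storage name "san" and VG name "vg_san" are just example values):
Code:
lvm: san
        vgname vg_san
        content rootdir,images
        shared 1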

Why:
For example, I have a php7.0 container on node1, php5.6 on node2 and php7.1 on node3. All of them run apache2.
I don't want to migrate folders (with PHP code and data) from php5.6 to php7.0.
I would like to only move the virtual host config to Apache in another container.
With bind mounts I can share, for example, a WWW folder with PHP code and data to all containers on different nodes.
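For illustration, such a bind mount can be added to a container like this (CT ID 101 and the target path /var/www are just examples):
Code:
# run on the node where the container lives; CT 101 and /var/www are examples
pct set 101 -mp0 /mnt/WWW,mp=/var/www
# this ends up as the following line in /etc/pve/lxc/101.conf:
#   mp0: /mnt/WWW,mp=/var/www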

I don't have a lot of storage and I can't keep migrating the specific directory containing the WWW files between containers and nodes with rsync.
With this solution I don't need to plan how much storage I need for php5.6, 7.0 or 7.1, and then resize it or, worse, shrink it.

The LUN created on the SAN (Fibre Channel) should be available and accessible from all nodes (done).
Now I am wondering about sharing its filesystem (GFS2) on a logical volume to all nodes.
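As a quick check on each node (device and VG names will of course differ), the shared LUN and the volume group on it should look the same everywhere:
Code:
multipath -ll        # the FC LUN should appear with the same WWID on every node
pvs && vgs && lvs    # the VG/LV created on the shared LUN should be visible on every node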

Description:
I installed DLM (dlm-controld), created a plain LVM logical volume (not CLVM) and formatted it with GFS2 (a rough sketch of the commands is below).
On the first node this LV is active; on the others I had to activate it manually.
I had one problem right after creating the GFS2 filesystem: rebooting node1 made the filesystem inaccessible on the other nodes.
Code:
commands such as df -h, ls, dd and touch hang and block the shell
After rebooting all of them the situation seems to be OK.
Now rebooting one of them does not affect the rest. Maybe reloading the PVE cluster services would have been enough.
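A rough sketch of the commands involved (the cluster name "pvecluster", the VG/LV names and the journal count are examples; -j should match the number of nodes):
Code:
mkfs.gfs2 -p lock_dlm -t pvecluster:www -j 3 /dev/vg_san/lv_www   # one journal per node
lvchange -ay vg_san/lv_www                                        # had to run this manually on the other nodes
mount -t gfs2 /dev/vg_san/lv_www /mnt/WWW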


I added an entry to fstab based on the UUID, but after rebooting a node I have to mount the filesystem manually with 'mount -a'. Why?
Code:
UUID=6fabd54e-184d-2155-1de6-a9f02f1a937f /mnt/WWW gfs2 noatime,_netdev 0 0
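My guess is that the mount is attempted before the DLM is up. One thing worth trying is ordering the mount after the DLM service with a systemd fstab option (the unit name dlm.service is an assumption about the dlm-controld package):
Code:
UUID=6fabd54e-184d-2155-1de6-a9f02f1a937f /mnt/WWW gfs2 noatime,_netdev,x-systemd.requires=dlm.service 0 0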

I can't create CLVM because of the following error message, which I also found on the forum, but without answers, since those threads had different purposes.
Code:
  connect() failed on local socket: No such file or directory
  Internal cluster locking initialisation failed.
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.

There is no clvmd systemd service, only the clvmd command.

I have heard that GFS2 should sit on LVM, not directly on a raw block device.

Questions:
Why isn't the LVM filesystem mounted after a reboot?
Can I stay with LVM (not CLVM) under GFS2, and is it safe?
Is there maybe another solution?

Edit: 2017-09-18
CLVM needs:
systemctl enable lvm2-cluster-activation
locking_type 3 in /etc/lvm/lvm.conf
vgchange -c y VOLUME_GROUP
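Put together, the steps look roughly like this (VOLUME_GROUP is a placeholder):
Code:
# in /etc/lvm/lvm.conf set:
#   locking_type = 3
systemctl enable lvm2-cluster-activation
vgchange -c y VOLUME_GROUP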

We can create a script in /etc/init.d to mount the GFS2 LV on a directory. GFS2 must be unmounted during reboot or shutdown (umount -a -t gfs2, or leave the entry in fstab).
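A minimal sketch of such an init script (untested; the VG/LV name and mount point are examples):
Code:
#!/bin/sh
### BEGIN INIT INFO
# Provides:          gfs2-www
# Required-Start:    $network $remote_fs
# Required-Stop:     $network $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Mount/unmount the shared GFS2 volume
### END INIT INFO

case "$1" in
  start)
    lvchange -ay vg_san/lv_www      # example VG/LV name
    mount /mnt/WWW                  # uses the fstab entry
    ;;
  stop)
    umount -a -t gfs2               # unmount all GFS2 filesystems before shutdown
    ;;
esac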




best regards,
nonton
 