Can't get GFS2 filesystem to mount on reboot

Jan 16, 2022
Hi all, after spending 8 hours on Google, I'm asking for help here!

I successfully created a GFS2 filesystem on 3 Proxmox 7.1 nodes.

I need to mount it manually on each reboot, as /etc/fstab does not wait for the cluster to be up.
There are a lot of examples for PCS from Red Hat, but I can't find anything related to Proxmox that works.

I have read that we can add a "cluster resource" to the Proxmox cluster, but I can't find anything related.

Your help would be appreciated.
 
you can use systemd to order your GFS2 mount accordingly (either by adding the required options to /etc/fstab, or by using a systemd mount unit instead).

some candidates depending on your requirements, in order of coming up during boot:

pve-cluster.service (/etc/pve)
pve-storage.target (after pve-cluster, but before the other services -> used to wait for storage parts to come up)
pveproxy (API)
pve-guests.service (onboot guest starting)

systemctl list-dependencies might help ;)
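
for example, via fstab options something like the following should work (device path and mountpoint are just placeholders for your setup, and this assumes dlm_controld ships as dlm.service):

Code:
/dev/vg0/gfs2  /mnt/gfs2  gfs2  noatime,_netdev,x-systemd.requires=dlm.service,x-systemd.after=pve-cluster.service  0 0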
 
Hi Fabian, thanks for your reply.

Here is what I have found so far during boot:
dlm fails, saying no local IP address has been set.

Code:
[ 11.719570] vmbr1: port 1(eno2) entered forwarding state
[ 12.285673] DLM installed
[ 12.301947] gfs2: GFS2 installed
[ 12.302625] gfs2: fsid=VCL1-MTL1:gfs2: Trying to join cluster "lock_dlm", "VCL1-MTL1:gfs2"
[ 12.302732] dlm: no local IP address has been set
[ 12.302758] dlm: cannot start dlm lowcomms -107
[ 12.303130] gfs2: fsid=VCL1-MTL1:gfs2: dlm_new_lockspace error -107
 
I never set up GFS2 - but ordering/dependency wise it needs to go after corosync, since it uses that for locking.
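
A quick way to check the current ordering (assuming dlm_controld is packaged as dlm.service on your nodes):

Code:
systemctl list-dependencies --after dlm.service    # corosync.service should show up here
systemctl status dlm.service corosync.service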
 
yeah, see man systemd-fstab-generator and man systemd.mount
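
a minimal mount unit could look roughly like this (paths are placeholders, and the unit file name has to match the mount point, e.g. mnt-gfs2.mount for /mnt/gfs2):

Code:
# /etc/systemd/system/mnt-gfs2.mount
[Unit]
Requires=dlm.service
After=dlm.service corosync.service pve-cluster.service

[Mount]
What=/dev/vg0/gfs2
Where=/mnt/gfs2
Type=gfs2
Options=noatime

[Install]
WantedBy=multi-user.target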
 
BTW: You also have to remove the $remote_fs dependency from /etc/init.d/rrdcached, otherwise you get a dependency cycle:

Code:
remote-fs.target -> rrdcached.service -> pve-cluster.service
     ^                                         |
     |                                         V
gfs2.mount   <-      dlm.service  <-     corosync.service

rrdcached never writes to a remote filesystem, AFAIK.
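
Concretely, that means removing $remote_fs from the LSB header in /etc/init.d/rrdcached and reloading systemd, so the generated wrapper unit drops its remote-fs.target ordering. Roughly:

Code:
# in /etc/init.d/rrdcached, change the LSB header:
#   Required-Start: $remote_fs $syslog   ->   Required-Start: $syslog
#   Required-Stop:  $remote_fs $syslog   ->   Required-Stop:  $syslog
systemctl daemon-reload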
 
@fabian do you think I can enable a secondary link for the Proxmox cluster? From memory, I think I read that corosync, or dlm, or some other service had issues handling 2 links while keeping GFS2 active.
If you have any advice before I do so, let me know. Otherwise I can probably configure a QDevice to use the secondary link, but if the first link fails, GFS2 will fail with it anyway.
 
@fabian I might be wrong.

Correct me if I'm wrong: DLM uses the cluster service from Proxmox? So if I configure a QDevice, DLM would effectively rely on it, since the QDevice helps maintain the quorum in corosync (pvecm)?
 
I have no practical experience whatsoever with GFS2. It's probably best to test it (including various failure scenarios) using a virtual cluster. Anything built on top of corosync that uses its quorum engine should take qdevices into account when determining whether the current partition is quorate.
 
Take qdevices into account out of the box, you mean? That would be great for sure.
 
yes. should be easy to find out though in a test cluster ;)
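
for reference, setting up and checking a QDevice in a test cluster would look roughly like this (the external host IP is a placeholder; corosync-qnetd has to run on that host):

Code:
pvecm qdevice setup 10.0.0.9    # external qnetd host
pvecm status                    # the Qdevice vote shows up in the quorum information
corosync-quorumtool -s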
 
