[GFS2] Filesystem does not mount automatically after node reboot

kahla

New Member
Oct 1, 2025
Hello everyone,

We have a 6-node cluster with vSAN and we want to share datastores using GFS2.
The datastores are available through multipath LUNs.

So far, we managed to:

- mount a datastore with GFS2 on all nodes,
- create a VM on this datastore,
- move this VM across all nodes in the cluster successfully.

The issue:
When we reboot a node, the GFS2 filesystem does not mount automatically at startup.

Has anyone experienced this issue before? Do you have any recommendations on what to check (for example fstab, systemd units, cluster services, or mount options) to ensure the filesystem mounts properly after reboot?

Thanks in advance for your help.
 
Hi... Here is a systemd unit that I use for OCFS2.
I hope you can adapt it to your needs.

Code:
# /etc/systemd/system/data.mount
[Unit]
Description=Data mount
# order the mount after the storage and OCFS2 cluster services
After=drbd.service
After=o2cb.service
After=ocfs2.service

[Mount]
What=/dev/mapper/mylun
Where=/data
Type=ocfs2
# _netdev also delays the mount until the network is up
Options=_netdev,defaults

[Install]
WantedBy=multi-user.target

I called it data.mount.
The name of the systemd unit is important: it must match the mount point path, so Where=/data requires the file to be named data.mount (systemd-escape -p --suffix=mount /data prints the correct name for any path).
After creating the unit, run:

Code:
systemctl daemon-reload
systemctl enable --now data.mount
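
To confirm the mount actually came up, something like this should do (findmnt is part of util-linux):

Code:
systemctl status data.mount
findmnt /data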

That's it!
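
Since the original question is about GFS2, the same pattern should carry over, but the dependencies differ: GFS2 wants corosync and the distributed lock manager rather than o2cb/ocfs2. A sketch along the same lines, assuming dlm-controld is installed (it ships dlm.service) and treating the device path and mount point as placeholders:

Code:
# /etc/systemd/system/data.mount -- hypothetical GFS2 variant
[Unit]
Description=GFS2 data mount
# GFS2 needs the cluster stack and the DLM before it can mount
Requires=dlm.service
After=corosync.service dlm.service

[Mount]
What=/dev/mapper/mylun
Where=/data
Type=gfs2
Options=_netdev,noatime

[Install]
WantedBy=multi-user.target

systemd derives a dependency on the underlying device unit from What= on its own, and _netdev additionally orders the mount after the network, which the DLM needs.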

Oh, and by the way, I usually change pve-guests.service (aliased as pve-manager.service) as well, to wait for the OCFS2 services.
I don't know if you need this, but here we go:
Code:
[Unit]
Description=PVE guests
ConditionPathExists=/usr/bin/pvesh
RefuseManualStart=true
RefuseManualStop=true
Wants=pvestatd.service
Wants=pveproxy.service
Wants=spiceproxy.service
Wants=pve-firewall.service
Wants=lxc.service
After=pveproxy.service
After=pvestatd.service
After=spiceproxy.service
After=pve-firewall.service
After=lxc.service
After=pve-ha-crm.service pve-ha-lrm.service
# I added these two lines
After=o2cb.service ocfs2.service
After=data.mount

[Service]
Environment="PVE_LOG_ID=pve-guests"
ExecStartPre=-/usr/share/pve-manager/helpers/pve-startall-delay
ExecStart=/usr/bin/pvesh --nooutput create /nodes/localhost/startall
ExecStop=-/usr/bin/vzdump -stop
ExecStop=/usr/bin/pvesh --nooutput create /nodes/localhost/stopall
Type=oneshot
RemainAfterExit=yes
TimeoutSec=infinity

[Install]
WantedBy=multi-user.target
Alias=pve-manager.service
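
One caveat: pve-guests.service is shipped by the pve-manager package, so editing the installed unit directly can be undone by upgrades. A drop-in override keeps just the extra ordering (a sketch, using the same unit names as above):

Code:
# systemctl edit pve-guests.service
# creates /etc/systemd/system/pve-guests.service.d/override.conf -- add:
[Unit]
After=o2cb.service ocfs2.service data.mount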

Cheers
 
Hi,

Thanks for sharing your systemd unit for OCFS2. I tried adapting it for my environment with GFS2 on Proxmox using LVM over multipath, but unfortunately, it doesn’t work for me.

Even after following the naming conventions and setting the proper dependencies, systemd refuses to start the .mount unit with a “bad unit file setting” error. The main issue is that for GFS2 on multipath LVM, systemd .mount units often fail at boot because the device isn’t ready yet, and the cluster stack (dlm, corosync) isn’t fully initialized.
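
For what it's worth, the "bad unit file setting" load state on a .mount unit usually means the file name does not match the escaped Where= path. And the ordering problem can also be expressed directly in fstab via systemd's x-systemd.* mount options; a sketch, with a placeholder device path:

Code:
# /etc/fstab -- device path is a placeholder
/dev/mapper/vg_gfs2-lv_data  /data  gfs2  _netdev,noatime,nofail,x-systemd.requires=dlm.service,x-systemd.device-timeout=60  0 0

nofail keeps a missing device from blocking boot, x-systemd.requires= adds Requires=/After= on dlm.service, and x-systemd.device-timeout= gives multipath time to assemble the device.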

Cheers,
 
Here are the errors I see during reboot with only the fstab configuration:

Code:
[   18.769090] dlm: no local IP address has been set
[   18.769091] dlm: cannot start dlm midcomms -107
[   18.769092] gfs2: fsid=CL-PVE-CO-DEV01:gfs2: dlm_new_lockspace error -107
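
Error -107 is ENOTCONN: GFS2 asked the DLM for a lockspace before dlm_controld had a connection to the cluster (corosync was not up yet, hence "no local IP address has been set"). It may be worth checking that both daemons are enabled and in what order they came up, for example:

Code:
systemctl status corosync.service dlm.service
journalctl -b -u corosync -u dlm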