Inconsistent ceph automounting between nodes

Reliant8275
Sep 4, 2023
I have three nodes in my cluster; each runs a Ceph mon, a mgr, and two MDS daemons. My problem comes when I try to mount on boot. I think the issue is with my /etc/fstab, pasted below. I tried using the Proxmox storage layer, but it was very inconsistent, while fstab works much better.

This setup works to automount my two pools on boot.
Code:
admin@.tank=/ /mnt/cephfs ceph mon_addr=192.168.1.5,_netdev,noauto,x-systemd.automount,noatime 0 0
admin@.plexbulk=/ /mnt/cephfs-plexbulk ceph mon_addr=192.168.1.5,_netdev,noauto,x-systemd.automount,noatime 0 0

But that didn't work on my latest node. I tried the entries below, which seemed to work, though possibly only after a manual "mount -a" or "systemctl daemon-reload":
Code:
192.168.1.4:/     /mnt/cephfs    ceph    name=admin,fs=tank,_netdev,x-systemd.automount,x-systemd.mount-timeout=30s,noatime    0   0
192.168.1.4:/     /mnt/cephfs-plexbulk    ceph    name=admin,fs=plexbulk,_netdev,x-systemd.automount,x-systemd.mount-timeout=30s,noatime    0   0
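One thing worth checking: systemd turns each fstab line that has x-systemd.automount into a .automount/.mount unit pair named after the escaped mount path, and those units are only regenerated on "systemctl daemon-reload" — which would explain why a manual daemon-reload "fixes" things after an fstab edit. A minimal Python sketch of that path escaping (the real tool is systemd-escape; this simplified version ignores corner cases like a leading dot), just to show which unit names to look for with "systemctl status":

```python
def systemd_escape_path(path: str) -> str:
    """Simplified version of systemd's path escaping for unit names:
    strip surrounding slashes, map '/' to '-', and hex-escape any
    character that is not alphanumeric, '_' or '.'."""
    trimmed = path.strip("/")
    out = []
    for ch in trimmed:
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out) or "-"

print(systemd_escape_path("/mnt/cephfs"))           # mnt-cephfs
print(systemd_escape_path("/mnt/cephfs-plexbulk"))  # mnt-cephfs\x2dplexbulk
```

So on the affected node you would inspect "mnt-cephfs.automount" and "mnt-cephfs\x2dplexbulk.automount" (note the escaped hyphen) to see whether both automount triggers are actually active after boot.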

Today it only mounted the second pool but not the first. Any ideas? In the end I want the mounts to survive a reboot so I don't have to SSH in after maintenance or a long power outage.
The real question is why one pool mounts and the other doesn't. They're running on the same set of drives, all SSDs. Maybe the timeout expires before tank is reachable, but by the time plexbulk is requested the cluster is available? I might try extending the timeout to a minute; I'd rather have a slow boot than a missing mount.
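If it is a timeout race, the only change needed would be bumping x-systemd.mount-timeout on both lines — a sketch of that variant, assuming the same mount points and monitor address as above:

```
192.168.1.4:/   /mnt/cephfs            ceph   name=admin,fs=tank,_netdev,x-systemd.automount,x-systemd.mount-timeout=60s,noatime      0  0
192.168.1.4:/   /mnt/cephfs-plexbulk   ceph   name=admin,fs=plexbulk,_netdev,x-systemd.automount,x-systemd.mount-timeout=60s,noatime  0  0
```

Since x-systemd.automount defers the actual mount until first access, a longer timeout shouldn't slow the boot itself; it only gives the cluster more time to answer when something first touches the path.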