[SOLVED] Stray Mons stuck in cephfs mountpoint

jaxx

Hi all!

I've been moving servers around (adding a new one and removing one that was 'too far' network-wise; waaaay better now btw, for those who know :) ).

I removed a node that had OSDs (the whole shebang) and also acted as a mon, and before that I added a server with the same setup, acting as a mon as well.

A CephFS pool and storage had been added and mounted prior to all that, and I noticed the IPs used by /mnt/pve/cephfs haven't changed; the mount still shows the previous nodes:

Code:
# mount | grep cephfs
10.137.99.1,10.137.99.10,10.137.99.2:/ on /mnt/pve/cephfs type ceph (rw,relatime,name=admin,secret=<hidden>,acl)

It still shows .1, .2 and .10 as mon hosts, even though .1 has been removed from the cluster and a .3 was added.

Disabled and re-enabled the mountpoint without any luck. The config itself looks correct:
Code:
[global]
mon_host = 10.137.99.2 10.137.99.10 10.137.99.3

and the three [mon.xxx] sections are correct.

Performance doesn't seem to suffer (or Ceph copes well with .1 missing), but I don't know where it's getting the old values from.
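
For reference, this is how I compared the addresses the kernel client is still using against the monitors the cluster currently knows about (assuming the ceph CLI is available on the node):
Code:
# grep /mnt/pve/cephfs /proc/mounts
# ceph mon dump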

Any clue ?
Thanks
 
Was that machine ever rebooted? If not, the old mount is still present. You can try to unmount it; it should be remounted within a few seconds.
Make sure you have nothing that is still actively using it.
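To double-check what is still using it, something like this can help (fuser and lsof may need to be installed first, they are not always there by default):
fuser -vm /mnt/pve/cephfs
or
lsof +f -- /mnt/pve/cephfs

Then: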
umount /mnt/pve/cephfs

or if it does not want to:
umount -l /mnt/pve/cephfs

You might also need the force flag (-f).

Or if you can, reboot that node. That would be the cleanest way.
 
#facepalm

I would have thought disabling and re-enabling the share would have naturally taken care of the mount.
Code:
# mount | grep cephfs
10.137.99.1,10.137.99.10,10.137.99.2:/ on /mnt/pve/cephfs type ceph (rw,relatime,name=admin,secret=<hidden>,acl)
# umount /mnt/pve/cephfs
# mount | grep cephfs
10.137.99.10,10.137.99.2,10.137.99.3:/ on /mnt/pve/cephfs type ceph (rw,relatime,name=admin,secret=<hidden>,acl)

I'll reboot once I have a chance to do so :)

Thanks !
 
I would have thought disabling and re-enabling the share would have naturally taken care of the mount.
Nope, disabling just means it won't try to mount it if it isn't mounted. Actively unmounting it could break something that still has files open.
 
Nope, disabling just means it won't try to mount it if it isn't mounted. Actively unmounting it could break something that still has files open.

Well... I have a few containers with mount points into the cephfs ... I cut the little traffic there was to be safe, and umount worked like a charm even without stopping the containers (but yeah, I'll plan a reboot anyway).
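
For anyone wondering which containers those are, a quick way to list the LXC configs that reference the cephfs storage in their mount points (the storage name 'cephfs' is just what mine is called):
Code:
# grep -l cephfs /etc/pve/lxc/*.conf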
 
