LXC Containers with CephFS Mountpoints Fail to Start at Boot

BlueMatt

Renowned Member
May 19, 2012
Basically what the title says. I have a few LXC containers with mount points on CephFS filesystems, used to sync container data across hosts. Sadly, they always fail to start at boot (I assume because CephFS is simply slower to come up than they are; they start fine when launched manually after boot).
 
Yeah, it's not really a big deal for me and easy to work around, as you point out; this is more a bug report than a request for help :)
 
To investigate the issue further, could you please check the syslog during LXC startup? The syslog from the PVE host should give us more information and help us understand the root cause.
 
It seems the mount was just too late.
The host came up at 00:34:34:
Nov 12 00:34:34 rackbeast corosync[7218]: [QUORUM] This node is within the primary component and will provide service.
Nov 12 00:34:34 rackbeast corosync[7218]: [QUORUM] Members[4]: 1 2 3 4
Nov 12 00:34:34 rackbeast corosync[7218]: [MAIN ] Completed service synchronization, ready to provide service.

The container in question was the first guest the host tried to start, at 00:34:37:
Nov 12 00:34:37 rackbeast pve-guests[7642]: <root@pam> starting task UPID:rackbeast:00001DDB:00000E71:6732A29D:startall::root@pam:
Nov 12 00:34:37 rackbeast pvesh[7642]: Starting CT 1013
Nov 12 00:34:37 rackbeast pve-guests[7643]: <root@pam> starting task UPID:rackbeast:00001DDC:00000E73:6732A29D:vzstart:1013:root@pam:
Nov 12 00:34:37 rackbeast pve-guests[7644]: starting CT 1013: UPID:rackbeast:00001DDC:00000E73:6732A29D:vzstart:1013:root@pam:
Nov 12 00:34:41 rackbeast sh[7212]: Running command: /usr/sbin/ceph-volume lvm trigger 1-4cea8dc9-2d9a-43a2-91f8-c302faa33029
Nov 12 00:34:41 rackbeast sh[7220]: Running command: /usr/sbin/ceph-volume lvm trigger 7-4a0453c7-2776-4c38-84ac-0ade91bc2658
Nov 12 00:34:41 rackbeast sh[7206]: Running command: /usr/sbin/ceph-volume lvm trigger 0-2a9b8cf2-b802-4dda-b032-fe892191e517
Nov 12 00:34:43 rackbeast ceph-mds[7182]: starting mds.rackbeast-2 at
Nov 12 00:34:43 rackbeast ceph-mds[7199]: starting mds.rackbeast at
Nov 12 00:34:43 rackbeast pmxcfs[6923]: [status] notice: received log
Nov 12 00:34:43 rackbeast kernel: netfs: FS-Cache loaded
Nov 12 00:34:43 rackbeast kernel: Key type cifs.spnego registered
Nov 12 00:34:43 rackbeast kernel: Key type cifs.idmap registered
Nov 12 00:34:43 rackbeast kernel: CIFS: Attempting to mount //69.59.18.197/mnt
Nov 12 00:34:44 rackbeast kernel: Key type ceph registered
Nov 12 00:34:44 rackbeast kernel: libceph: loaded (mon/osd proto 15/24)
Nov 12 00:34:44 rackbeast kernel: rbd: loaded (major 251)
Nov 12 00:34:44 rackbeast kernel: libceph: mon0 (1)69.59.18.247:6789 session established
Nov 12 00:34:44 rackbeast kernel: libceph: client58221316 fsid 0481f941-d8e3-4f47-8a8c-7c531ae0a614
Nov 12 00:34:44 rackbeast kernel: rbd: rbd0: capacity 34359738368 features 0x3d
Nov 12 00:34:44 rackbeast systemd[1]: Created slice system-pve\x2dcontainer.slice - PVE LXC Container Slice.
Nov 12 00:34:44 rackbeast systemd[1]: Started pve-container@1013.service - PVE LXC Container: 1013.
Nov 12 00:34:44 rackbeast kernel: EXT4-fs (rbd0): mounted filesystem 2edfd69f-f74e-4a9d-bfdf-607d865a4259 r/w with ordered data mode. Quota mode: none.
Nov 12 00:34:44 rackbeast pve-guests[7644]: startup for container '1013' failed
Nov 12 00:34:45 rackbeast kernel: EXT4-fs (rbd0): unmounting filesystem 2edfd69f-f74e-4a9d-bfdf-607d865a4259.
Nov 12 00:34:45 rackbeast pvesh[7642]: Starting CT 1013 failed: startup for container '1013' failed

But /mnt/pve mounts don't start getting processed until 00:34:45:

Nov 12 00:34:45 rackbeast systemd[1]: Mounting mnt-pve-ceph_backups.mount - /mnt/pve/ceph_backups...

...and the mount we actually need isn't processed until 00:34:48:

Nov 12 00:34:48 rackbeast systemd[1]: Mounted mnt-pve-ceph_backups.mount - /mnt/pve/ceph_backups.
Nov 12 00:34:48 rackbeast systemd[1]: Mounting mnt-pve-cephfs.mount - /mnt/pve/cephfs...
Nov 12 00:34:49 rackbeast systemd[1]: Mounted mnt-pve-cephfs.mount - /mnt/pve/cephfs.
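Given that timeline, one workaround sketch on the host side (assuming a standard systemd-based PVE install, and using the `mnt-pve-cephfs.mount` unit name from the log above) is a systemd drop-in that orders the guest autostart service after the CephFS mount. The drop-in path and file name here are my own choice, not an official PVE setting:

```ini
# /etc/systemd/system/pve-guests.service.d/wait-for-cephfs.conf
# Sketch of a workaround, not an official PVE mechanism.
[Unit]
# Don't run "start all guests on boot" until the CephFS storage is mounted.
After=mnt-pve-cephfs.mount
Requires=mnt-pve-cephfs.mount
```

After creating the drop-in, run `systemctl daemon-reload` so it takes effect at the next boot. Note the trade-off: with `Requires=`, a failed CephFS mount would block all guest autostarts, so `Wants=` may be the safer choice if only some containers depend on that mount.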
 
