Ceph / CephFS feature level mismatch

Waschbüsch

Dec 15, 2014
I am experiencing the same issue mentioned here:
and here:
(both of which seem to have remained unanswered)

The kernel driver for CephFS seems to be using a feature level of luminous, while the rest of Ceph is at the squid release.
Running ceph features without CephFS mounted:
JSON:
{
    "mon": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 5
        }
    ],
    "mds": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 5
        }
    ],
    "osd": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 47
        }
    ],
    "client": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 297
        }
    ],
    "mgr": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 5
        }
    ]
}

And with CephFS mounted:

JSON:
{
    "mon": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 5
        }
    ],
    "mds": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 5
        }
    ],
    "osd": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 47
        }
    ],
    "client": [
        {
            "features": "0x2f018fb87aa4aafe",
            "release": "luminous",
            "num": 1
        },
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 297
        }
    ],
    "mgr": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 5
        }
    ]
}
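For anyone who wants to spot older clients automatically, here is a quick sketch (my own helper, not part of Ceph) that takes the JSON printed by ceph features and lists the client entries reporting a release older than a given target:

```python
import json

# Ceph named releases in order, oldest first (luminous .. squid).
CEPH_RELEASES = [
    "luminous", "mimic", "nautilus", "octopus",
    "pacific", "quincy", "reef", "squid",
]

def outdated_clients(features_json: str, target: str = "squid"):
    """Return the 'client' entries from `ceph features` output whose
    release name is older than `target`."""
    data = json.loads(features_json)
    cutoff = CEPH_RELEASES.index(target)
    return [
        entry for entry in data.get("client", [])
        if entry.get("release") in CEPH_RELEASES
        and CEPH_RELEASES.index(entry["release"]) < cutoff
    ]
```

Feeding it the output above would flag the single luminous entry (the kernel CephFS mount) while leaving the 297 squid clients alone.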

Since I would like to make use of squid features (e.g. the read balancer), I wonder whether it is possible at all to use CephFS (which I use to provide ISOs / templates across the cluster) and the new features at the same time. At the very least, the documentation might benefit from a note that using CephFS will (as of now) prevent the use of some current Ceph features.
 