I experience the same issue mentioned here: and here: (both of which seem to have remained unanswered).
Hello,
I am trying to put the balancer mode into upmap-read, which requires running:
Code:
ceph osd set-require-min-compat-client reef
I can't do so because I have a single client running on the luminous compat version:
Code:
# ceph features
{
    "mon": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 1
        }
    ],
    "osd": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 8
        }
    ],
    "client": [
        {
            "features": "0x2f018fb87aa4aafe"...
- qontinuum
- ceph 19.2.1 pve 8.4.1
- Replies: 2
- Forum: Proxmox VE: Installation and configuration
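For reference, the sequence being attempted in that thread would look roughly like this (a sketch: the upmap-read mode and the reef minimum come from the post above; the surrounding balancer commands are the standard ones and assume the balancer module is enabled):

Code:
# raise the minimum client feature level; this is the step that fails
# while any connected client still reports luminous-level features
ceph osd set-require-min-compat-client reef

# once that succeeds, the read-optimizing balancer mode can be selected
ceph balancer mode upmap-read
ceph balancer on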
I have installed the Ceph 19.2 Squid release and would like to use some of its new features.
Best I can tell erasure coding has 2 new features that look useful to me.
crush-num-failure-domains=
crush-osds-per-failure-domain=
I can't currently find where I learned about these, but I believe they generate some sort of custom CRUSH map rule when used.
When I used it to create an erasure coded pool:
Code:
ceph osd pool create newpool erasure newprofile
Error EINVAL: new crush map requires client version squid but require_min_compat_client is luminous
I set:
Code:
ceph osd set-require-min-compat-client squid...
- cfanta05
- Replies: 0
- Forum: Proxmox VE: Installation and configuration
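The profile behind that pool-create command isn't shown in the excerpt, but with those two options it would look something like the sketch below (hypothetical values: the newpool/newprofile names and the two crush-* options come from the post, while k, m, and the failure-domain counts are illustrative, chosen so that 3 × 2 matches k+m):

Code:
# hypothetical 4+2 erasure-code profile spread across 3 failure domains
# (hosts), taking 2 OSDs from each
ceph osd erasure-code-profile set newprofile \
    k=4 m=2 \
    crush-failure-domain=host \
    crush-num-failure-domains=3 \
    crush-osds-per-failure-domain=2

ceph osd pool create newpool erasure newprofile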
The kernel driver for CephFS seems to be using a feature level of luminous, while the rest of Ceph is at the squid release.
Running ceph features without CephFS mounted:
JSON:
{
    "mon": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 5
        }
    ],
    "mds": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 5
        }
    ],
    "osd": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 47
        }
    ],
    "client": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 297
        }
    ],
    "mgr": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 5
        }
    ]
}
and with CephFS mounted:
JSON:
{
    "mon": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 5
        }
    ],
    "mds": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 5
        }
    ],
    "osd": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 47
        }
    ],
    "client": [
        {
            "features": "0x2f018fb87aa4aafe",
            "release": "luminous",
            "num": 1
        },
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 297
        }
    ],
    "mgr": [
        {
            "features": "0x3f03cffffffdffff",
            "release": "squid",
            "num": 5
        }
    ]
}
Since I would like to make use of squid features (the read balancer), I wonder whether it is possible to use CephFS (which I use to provide ISOs / templates across the cluster) and the new features at the same time. At the very least, the documentation might benefit from a note that using CephFS will (as of now) prevent the use of some current Ceph features.
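For anyone trying to pin down which session is the one holding things back, the monitor's session list reports the negotiated feature release per client (a sketch: the exact output fields may vary by release, and the mon ID used here assumes it matches the host's short name, as is typical on PVE):

Code:
# on a monitor node, list sessions and filter for the luminous client
ceph tell mon.$(hostname -s) sessions | grep -i luminous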