Upgrade Ceph to Luminous problem

After running the upgrade to Luminous, my data pool seems to be gone, but it is still accessible:

Code:
root@nod2:~# ceph status
  cluster:
    id:     d13548c9-2763-4d87-bf30-27de2be235fd
    health: HEALTH_WARN
            crush map has straw_calc_version=0
            no active mgr

  services:
    mon: 3 daemons, quorum 2,1,0
    mgr: no daemons active
    osd: 8 osds: 8 up, 8 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:

When entering

ceph osd require-osd-release luminous

as the upgrade guide suggests, I get the following error:

Code:
root@nod2:~# ceph osd set-require-min-compat-client jewel
Error EPERM: cannot set require_min_compat_client to jewel: 3 connected client(s) look like hammer (missing 0x400000000000000); add --yes-i-really-mean-it to do it anyway
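The `missing 0x400000000000000` in that message is a feature bit: 0x400000000000000 is 1 << 58, which, as far as I can tell, is the CRUSH_TUNABLES5 bit that Jewel-era clients advertise (the bit is shared with a couple of other Jewel features). A minimal sketch to check whether a given client feature mask contains that bit, using the masks reported by `ceph features` later in this thread:

```shell
# Check whether a feature mask includes the bit the error says is missing
# (0x400000000000000 == 1 << 58; assumed here to be the CRUSH_TUNABLES5
# bit that jewel-capable clients advertise).
has_jewel_bit() {
  if [ $(( $1 & 0x400000000000000 )) -ne 0 ]; then
    echo yes
  else
    echo no
  fi
}

# Feature masks as reported by `ceph features` in this thread:
has_jewel_bit 0x106b84a842a42     # hammer client   -> no
has_jewel_bit 0x40107b86a842ada   # jewel client    -> yes
has_jewel_bit 0x1ffddff8eea4fffb  # luminous client -> yes
```

Any connected client whose mask fails this check counts toward the "look like hammer" total in the EPERM error.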

Trying to set the tunables to optimal gives a similar error:

Code:
root@nod2:~# ceph osd crush tunables optimal
Error EINVAL: new crush map requires client version jewel but require_min_compat_client is firefly

All servers have been upgraded, and the Ceph monitors and OSDs have been restarted.
What is wrong?
 
Your health status also shows:

> no active mgr

It is recommended to run a Ceph manager (mgr) on every monitor host. For new setups it is created automatically; after an upgrade you have to create the mgr manually on each monitor host. Note that since Luminous the usage and pool statistics shown by ceph status are reported by the mgr, so without an active mgr they all show as zero.

> pveceph createmgr

see also:
http://docs.ceph.com/docs/luminous/mgr/
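Once the manager is created, it should register within a few seconds; a quick sketch to verify it (the mon/mgr ID is usually the short hostname on Proxmox, but adjust to your setup):

```shell
# The mgr line of ceph status should now show an active daemon,
# e.g. "mgr: nod1(active)":
ceph -s | grep mgr

# And the mgr service unit on this host should be running
# (unit name assumed to follow the ceph-mgr@<id> convention):
systemctl status ceph-mgr@$(hostname -s)
```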
 
> pveceph createmgr

When running the above command, I get:

Code:
root@nod1:~# pveceph createmgr
creating manager directory '/var/lib/ceph/mgr/ceph-nod1'
creating keys for 'mgr.nod1'
unable to open file '/var/lib/ceph/mgr/ceph-nod1/keyring.tmp.24089' - No such file or directory

What could this be?

EDIT:

I had to install ceph-mgr first (apt-get install ceph-mgr).
 
The "3 connected clients" error turned out to be caused by a rados df command run by Proxmox itself; after killing that process, the command worked.
But why does Ceph no longer see my pools?

What did you do to fix this?

I get the same error on an 8-node cluster:

Error EPERM: cannot set require_min_compat_client to jewel: 6 connected client(s) look like hammer (missing 0x400000000000000); add --yes-i-really-mean-it to do it anyway

Code:
# ceph features
{
    "mon": {
        "group": {
            "features": "0x1ffddff8eea4fffb",
            "release": "luminous",
            "num": 7
        }
    },
    "mds": {
        "group": {
            "features": "0x1ffddff8eea4fffb",
            "release": "luminous",
            "num": 2
        }
    },
    "osd": {
        "group": {
            "features": "0x1ffddff8eea4fffb",
            "release": "luminous",
            "num": 88
        }
    },
    "client": {
        "group": {
            "features": "0x106b84a842a42",
            "release": "hammer",
            "num": 6
        },
        "group": {
            "features": "0x40107b86a842ada",
            "release": "jewel",
            "num": 5
        },
        "group": {
            "features": "0x1ffddff8eea4fffb",
            "release": "luminous",
            "num": 7
        }
    }
}


Code:
# ceph status
  cluster:
    id:     c5f7xyz
    health: HEALTH_WARN
            noout flag(s) set

  services:
    mon: 7 daemons, quorum 4,3,0,1,2,5,6
    mgr: srv3xyz(active), standbys: srv1xyz, srv4xyz, srv3xyz, srv3xyz, srv1xyz, srv4xyz
    mds: cephfs-1/1/1 up  {0=0=up:active}, 1 up:standby
    osd: 88 osds: 88 up, 87 in
         flags noout

  data:
    pools:   8 pools, 3296 pgs
    objects: 26864k objects, 93659 GB
    usage:   187 TB used, 307 TB / 494 TB avail
    pgs:     3293 active+clean
             3    active+clean+scrubbing+deep

  io:
    client:   1149 kB/s wr, 0 op/s rd, 288 op/s wr

# ceph osd set-require-min-compat-client jewel
Error EPERM: cannot set require_min_compat_client to jewel: 6 connected client(s) look like hammer (missing 0x400000000000000); add --yes-i-really-mean-it to do it anyway

I have three different client types: hammer, jewel, and luminous. I don't know how to "kill" or "upgrade" those 6 hammer clients, nor can I find them. Can anyone help me, please? I have already restarted all servers, but I still cannot get rid of those 6 clients. How can I find them?
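One way to locate the stragglers (a sketch; the mon ID below assumes the short hostname, adjust for your setup) is to ask each monitor for its currently connected sessions over the admin socket. Each session entry includes the client's address and feature mask, so any client whose mask lacks the 0x400000000000000 bit from the error is one of the "hammer" clients:

```shell
# Run on each monitor host: dump the sessions connected to that mon,
# including each client's address and feature mask.
ceph daemon mon.$(hostname -s) sessions

# The currently enforced floor can be double-checked in the OSD map:
ceph osd dump | grep require_min_compat_client
```

Old kernel clients and long-running processes holding an old librados (as seen earlier in this thread with Proxmox's own rados df) are typical candidates.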
 
Thanks for your reply.
Once I upgraded all nodes to Proxmox 5.1 and restarted them, I was able to run "ceph osd set-require-min-compat-client jewel".
Thanks!
 
