[SOLVED] Ceph Pacific upgrade to Quincy osd boot problem

czechsys

Hi,
I followed the PVE documentation for upgrading Ceph Pacific to Quincy: full upgrade to the latest PVE 7, reboot, switch the repository to Quincy, upgrade, restart mons/mgrs/osds. All of that went fine. But after rebooting one of the upgraded hosts, its OSDs don't come up:

Code:
Jul 11 02:31:03 proxmox-03 systemd[1]: Starting Ceph object storage daemon osd.4...
Jul 11 02:31:03 proxmox-03 systemd[1]: Started Ceph object storage daemon osd.4.
Jul 11 02:31:03 proxmox-03 ceph-osd[9053]: 2023-07-11T02:31:03.229+0200 7faa969183c0 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-4/keyring: (2) No such file or directory
Jul 11 02:31:03 proxmox-03 ceph-osd[9053]: 2023-07-11T02:31:03.229+0200 7faa969183c0 -1 AuthRegistry(0x55e57e5f8140) no keyring found at /var/lib/ceph/osd/ceph-4/keyring, disabling cephx
Jul 11 02:31:03 proxmox-03 ceph-osd[9053]: 2023-07-11T02:31:03.233+0200 7faa969183c0 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-4/keyring: (2) No such file or directory
Jul 11 02:31:03 proxmox-03 ceph-osd[9053]: 2023-07-11T02:31:03.233+0200 7faa969183c0 -1 AuthRegistry(0x7ffd31921e90) no keyring found at /var/lib/ceph/osd/ceph-4/keyring, disabling cephx
Jul 11 02:31:03 proxmox-03 ceph-osd[9053]: failed to fetch mon config (--no-mon-config to skip)
Jul 11 02:31:03 proxmox-03 systemd[1]: ceph-osd@4.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 02:31:03 proxmox-03 systemd[1]: ceph-osd@4.service: Failed with result 'exit-code'.
Jul 11 02:31:13 proxmox-03 systemd[1]: ceph-osd@4.service: Scheduled restart job, restart counter is at 3.
Jul 11 02:31:13 proxmox-03 systemd[1]: Stopped Ceph object storage daemon osd.4.
Jul 11 02:31:13 proxmox-03 systemd[1]: ceph-osd@4.service: Start request repeated too quickly.
Jul 11 02:31:13 proxmox-03 systemd[1]: ceph-osd@4.service: Failed with result 'exit-code'.
Jul 11 02:31:13 proxmox-03 systemd[1]: Failed to start Ceph object storage daemon osd.4.
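The failing part is the OSD not finding its keyring, which normally lives on a per-OSD tmpfs that ceph-volume populates at boot. As a debugging step of my own (not something the log above shows), I would first check whether that tmpfs was ever mounted and whether the activation units ran:

```shell
# The per-OSD directory is a tmpfs populated by ceph-volume at boot;
# if nothing is mounted there, activation never ran for this OSD
mount | grep /var/lib/ceph/osd
ls -l /var/lib/ceph/osd/ceph-4/

# Check whether the ceph-volume activation units ran during this boot
systemctl list-units 'ceph-volume@*' --all
journalctl -b -u 'ceph-volume@*'
```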

Code:
root@proxmox-03:/var/log# lvs
File descriptor 9 (pipe:[101154]) leaked on lvs invocation. Parent PID 15233: bash
File descriptor 11 (pipe:[101155]) leaked on lvs invocation. Parent PID 15233: bash
  LV                                             VG                                        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  osd-block-a72ae29a-e413-4848-bc70-c2c76563a567 ceph-220d4477-940f-4970-b3a7-1836b88b11fc -wi-a-----  <1.75t
  osd-block-2ff9ed5f-075a-4e42-bc22-da303006b98c ceph-4962b11b-3aaf-4a65-9c1f-c66cb9a4dbd8 -wi-a-----  <1.75t
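The LVs are still there, so the OSD data itself should be intact. As a sanity check (my suggestion, assuming these are LVM-based OSDs), ceph-volume should still be able to map each LV back to its OSD id from the metadata stored on the LV:

```shell
# List all ceph-volume managed OSDs on this host; for each LV this
# prints the osd id, osd fsid, cluster fsid, and block device path
ceph-volume lvm list
```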

Code:
root@proxmox-03:/var/log# ceph -s
  cluster:
    id:     99c06983-733f-482b-b59e-ad21f55c20e3
    health: HEALTH_WARN
            Degraded data redundancy: 200020/600060 objects degraded (33.333%), 129 pgs degraded, 129 pgs undersized

  services:
    mon: 3 daemons, quorum proxmox-01,proxmox-02,proxmox-03 (age 42m)
    mgr: proxmox-02(active, since 54m), standbys: proxmox-01, proxmox-03
    osd: 6 osds: 4 up (since 55m), 4 in (since 21m)

  data:
    pools:   2 pools, 129 pgs
    objects: 200.02k objects, 743 GiB
    usage:   1.2 TiB used, 5.8 TiB / 7.0 TiB avail
    pgs:     200020/600060 objects degraded (33.333%)
             129 active+undersized+degraded

How can I get the OSDs up again? The /var/lib/ceph/osd/ceph-X directory is empty.
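From what I can tell, the empty /var/lib/ceph/osd/ceph-X directories mean the per-OSD tmpfs was never repopulated after the reboot; ceph-volume can normally rebuild it (including the keyring) from the metadata on the LVs. A sketch of what I would try, assuming LVM-based OSDs (the osd id/fsid pairing below is illustrative, taken from the LV name above; verify with `ceph-volume lvm list` first):

```shell
# Re-create the tmpfs mounts and keyrings for all LVM-based OSDs on
# this host from the LV metadata, then start the ceph-osd daemons
ceph-volume lvm activate --all

# Alternatively, activate a single OSD by its id and fsid as reported
# by `ceph-volume lvm list`, e.g. (hypothetical pairing):
# ceph-volume lvm activate 4 a72ae29a-e413-4848-bc70-c2c76563a567

# Confirm the OSD directory is populated and the service is up
ls -l /var/lib/ceph/osd/ceph-4/
systemctl status ceph-osd@4
```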
 
Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.108-1-pve)
pve-manager: 7.4-15 (running version: 7.4-15/a5d2a31e)
pve-kernel-5.15: 7.4-4
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
ceph: 17.2.6-pve1
ceph-fuse: 17.2.6-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve2
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
openvswitch-switch: 2.15.0+ds1-2+deb11u4
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-6
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+1
pve-firewall: 4.3-4
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1
 
